Showing results for tags 'best practices'.

  1. Virtual private networks (VPNs) are a staple of the modern work environment. VPNs provide an essential layer of protection for employees working remotely or across multiple office locations, encrypting data traffic to stop hackers from intercepting and stealing information. Usage of VPNs skyrocketed in the wake of the COVID-19 pandemic and remains high — 77% of employees use a VPN for their work nearly every day, according to the 2023 VPN Risk Report by Zscaler. The post Best Practices to Strengthen VPN Security appeared first on Security Boulevard. View the full article
  2. This blog discusses the essentials of PCI DSS compliance, and the 5 best practices for maintaining compliance. The post The 5 Best Practices for PCI DSS Compliance appeared first on Scytale. The post The 5 Best Practices for PCI DSS Compliance appeared first on Security Boulevard. View the full article
  3. GitOps represents a transformative approach to managing and deploying applications within Kubernetes environments, offering many benefits ranging from automation to enhanced collaboration. By centralizing operations around Git repositories, GitOps streamlines processes, fosters reliability, and nurtures teamwork. However, as teams embrace GitOps principles, the natural question arises: can these principles extend to managing databases? The answer is a resounding yes! Yet, while GitOps seamlessly aligns with stateless application management, applying it to stateful workloads, especially databases, presents distinct challenges. In this article, we’ll delve into the landscape of implementing GitOps for stateful applications and databases in Kubernetes, exploring five essential best practices to navigate this terrain effectively.

Considerations for applying GitOps to stateful applications

Versioning and managing stateful data
When applying GitOps principles, it’s crucial to version control persistent data alongside application code. Tools like Git LFS (Large File Storage) can help you manage large datasets efficiently. Ensure that changes to stateful data are captured in Git commits and properly documented to maintain data integrity and facilitate reproducibility.

Handling database schema changes and migrations
Database schema changes and migrations require careful handling in GitOps workflows. Define database schema changes as code and store migration scripts in version-controlled repositories. Test and apply migrations consistently across environments with automated tools and continuous integration/continuous delivery processes.

Backup and disaster recovery strategies
Develop robust backup and disaster recovery strategies for stateful applications. Regularly back up data and configuration files to resilient storage solutions. Test data recovery and automate backup procedures to ensure preparedness for unforeseen events or data loss.
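The backup guidance above ("test data recovery and automate backup procedures") can be sketched as a small checksum-verified backup/restore routine. This is a minimal illustration under assumed conventions (a single data file, a sidecar .sha256 checksum file), not a production backup tool:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def back_up(source: Path, backup_dir: Path) -> Path:
    """Copy the data file into backup_dir and record its checksum
    in a sidecar file, so a later restore can be verified."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / source.name
    shutil.copy2(source, dest)
    (backup_dir / (source.name + ".sha256")).write_text(sha256_of(source))
    return dest

def verify_restore(backup: Path, scratch_dir: Path) -> bool:
    """Restore the backup into a scratch directory and confirm the
    restored copy matches the checksum captured at backup time."""
    scratch_dir.mkdir(parents=True, exist_ok=True)
    restored = scratch_dir / backup.name
    shutil.copy2(backup, restored)
    expected = (backup.parent / (backup.name + ".sha256")).read_text()
    return sha256_of(restored) == expected
```

Running a routine like verify_restore on a schedule turns "we have backups" into "we have restores", which is what the practice above is really after.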
You can also leverage GitOps practices to manage backup configurations and version-controlled recovery plans.

Managing migrations like any other GitOps application
Migrations should follow suit as applications are deployed and managed using GitOps principles. This means defining migration tasks as declarative configurations stored in Git repositories alongside other application artifacts. These migration configurations should specify the desired state of the database schema or data transformation, including any dependencies or prerequisites. GitOps operators, such as the Atlas Operator for databases, can then pull these migration configurations from Git repositories and apply them to target databases. The operator ensures that the database’s actual state aligns with the desired state defined in the Git repository, automating the process of applying migrations and maintaining consistency across environments.

Handling stateful application upgrades and rollbacks
Plan and execute stateful application upgrades and rollbacks carefully. Define upgrade strategies that minimize downtime and data loss during the migration process. Utilize GitOps principles to manage version-controlled manifests for application upgrades, ensuring consistency and reproducibility across environments. Implement automated rollback mechanisms to revert to previous application versions in case of failures or issues during upgrades.

Best practices for implementing GitOps with stateful applications

Infrastructure as Code (IaC) for provisioning storage resources
Make your storage resources part of your Infrastructure as Code (IaC) practices. Define your storage configurations using tools like Terraform or Kubernetes manifests, and keep them version-controlled alongside your application code. This ensures consistency and reproducibility in your infrastructure deployments.
Using Helm charts or operators
Simplify the deployment and management of your stateful applications, including databases, by using Helm charts or Kubernetes operators. Helm helps package and template complex configurations, while operators automate common operational tasks. Pick the best fit for your needs and keep your application management consistent.

Implementing automated testing
Automate your testing as much as possible for your stateful applications, including database changes. Develop thorough test suites to check everything from functionality to performance. Tools like Kubernetes Testing Framework (KTF) can help simulate production-like environments and catch issues early on.

Continuous integration/continuous delivery (CI/CD) pipelines
Set up CI/CD pipelines tailored to your stateful applications, focusing on databases. Automate your build, test, and deployment processes to ensure smooth operation. Remember to trigger pipeline executions based on version-controlled changes so you have consistent deployments across different environments.

Storing everything in Git
By storing everything in Git repositories, teams benefit from version control, traceability, and collaboration. Every change made to configurations or migrations is tracked, providing a clear history of modifications and enabling easy rollback to previous states if necessary. Moreover, Git’s branching and merging capabilities facilitate collaborative development efforts, allowing multiple team members to work concurrently on different features or fixes without stepping on each other’s toes.

Mastering GitOps for stateful workloads
Applying GitOps principles to stateful applications, including databases, brings numerous benefits to development and operations teams. By storing everything in Git repositories, including data, migration scripts, and configurations, teams ensure version control, traceability, and collaboration.
Handling database schema changes, migrations, and backups within GitOps workflows ensures consistency and reliability across environments. Moreover, managing migrations and upgrades as part of GitOps applications streamlines deployment processes and reduces the risk of errors. Implementing best practices such as Infrastructure as Code (IaC), leveraging Helm charts or operators, and implementing automated testing and CI/CD pipelines further enhances the efficiency and reliability of managing stateful applications. By adopting GitOps for stateful applications, organizations can achieve greater agility, scalability, and resilience in their software delivery processes. With a solid foundation of GitOps principles and best practices in place, teams can confidently navigate the complexities of managing stateful applications in Kubernetes environments, enabling them to focus on efficiently delivering value to their customers. The post 5 Best practices for implementing GitOps for stateful applications and databases in Kubernetes appeared first on Amazic. View the full article
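The operator pattern described above (compare the desired state declared in Git with the database's actual state, then apply what's missing) reduces to a short reconcile loop. This is not the Atlas Operator's code; the data shapes and function names below are assumptions for illustration:

```python
def reconcile_migrations(desired, applied, apply_fn):
    """Bring a database toward the migration history declared in Git.

    desired  -- ordered migration versions committed to the repo,
                e.g. ["001_init", "002_add_users_table"]
    applied  -- versions already recorded as applied in the database
    apply_fn -- callback that runs a single migration against the target

    Returns the list of versions applied during this reconcile pass.
    """
    # A GitOps operator refuses to guess when histories diverge:
    # the applied history must be a prefix of the declared history.
    if applied != desired[:len(applied)]:
        raise RuntimeError("applied migrations diverge from declared history")
    pending = desired[len(applied):]
    for version in pending:
        apply_fn(version)  # e.g. execute the versioned SQL script
    return pending
```

Because the declared history lives in Git, every schema change gets the same review, rollback, and audit trail as application code, which is the point of treating migrations like any other GitOps application.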
  4. Kubernetes has transformed container orchestration, providing an effective framework for delivering and managing applications at scale. However, efficient storage management is essential to guarantee the dependability, security, and efficiency of your Kubernetes clusters. Benefits like data loss prevention, regulatory compliance, and operational continuity underscore the importance of security and dependability. This post will examine the top 10 best practices for Kubernetes storage, emphasizing encryption, access control, and safeguarding storage components.

Kubernetes Storage
Kubernetes storage is essential to contemporary cloud-native setups because it makes data persistence in containerized apps more effective. It provides a dependable and scalable storage resource management system that guarantees data permanence through migrations and restarts of containers. Among other capabilities, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) give Kubernetes a versatile abstraction layer for managing storage. By providing dynamic provisioning of storage volumes catered to particular workload requirements, storage classes further improve flexibility. Organizations can build and manage stateful applications with agility, scalability, and resilience in various computing settings by utilizing Kubernetes storage capabilities.

1. Data Encryption
Sensitive information kept in Kubernetes clusters must be protected with data encryption. Use encryption tools like Kubernetes Secrets to safely store sensitive information like SSH keys, API tokens, and passwords. Use encryption both in transit and at rest to further protect data while it is being stored and transmitted between nodes.

2. Use Secrets Management Tools
Steer clear of hardcoding private information straight into Kubernetes manifests.
Instead, use powerful secrets management solutions like Vault or Kubernetes Secrets to securely maintain and distribute secrets throughout your cluster. This guarantees that private information is encrypted and available only to approved users and applications.

3. Implement Role-Based Access Control (RBAC)
RBAC allows you to enforce fine-grained access controls on your Kubernetes clusters. Define roles and permissions to limit access to storage resources using the least privilege concept. This lowers the possibility of data breaches and unauthorized access by preventing unauthorized users or apps from accessing or changing crucial storage components.

4. Secure Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)
Ensure that persistent volumes and claims are adequately secured to avoid tampering or unwanted access. Put security rules in place to limit access to particular namespaces or users and turn on encryption for information on persistent volumes. Regularly audit and monitor PVs and PVCs to identify and address any security flaws or unwanted entry attempts.

5. Enable Network Policies
To manage network traffic between pods and storage resources, use Kubernetes network policies. To guarantee that only authorized pods and services may access storage volumes and endpoints, define firewall rules restricting communication to and from storage components. This reduces the possibility of data exfiltration and network-based assaults and prevents unauthorized network access.

6. Enable Role-Based Volume Provisioning
Utilize Kubernetes’ dynamic volume provisioning features to automate storage volume creation and management. To limit users’ ability to create or delete volumes based on their assigned roles and permissions, utilize role-based volume provisioning. This guarantees the effective and safe allocation of storage resources and helps prevent resource abuse.

7. Utilize Pod Security Policies
Implement pod security policies to specify and enforce security restrictions on pods’ access to storage resources. Specify security policies to manage pod rights, host resource access, and storage volume interactions. By implementing stringent security measures, you can reduce the possibility of privilege escalation, container escapes, and illegal access to storage components.

8. Regularly Update and Patch Kubernetes Components
Monitor security flaws by regularly patching and updating Kubernetes components, including storage drivers and plugins. Keep your storage infrastructure safe from new attacks and vulnerabilities by subscribing to security advisories and adhering to best practices for Kubernetes cluster management.

9. Monitor and Audit Storage Activity
To keep tabs on storage activity in your Kubernetes clusters, put extensive logging, monitoring, and auditing procedures in place. To proactively identify security incidents or anomalies, monitor access logs, events, and metrics on storage components. Utilize centralized logging and monitoring systems to see what’s happening with storage in your cluster.

10. Conduct Regular Security Audits and Penetration Testing
Conduct comprehensive security audits and penetration tests regularly to evaluate the security posture of your Kubernetes storage system. Find and fix any security holes, incorrect setups, and deployment flaws in your storage system before hackers can exploit them. Work with security professionals and use automated security technologies to thoroughly audit your Kubernetes clusters.

Considerations
Before putting these suggestions for Kubernetes storage into practice, take into account the following:

Evaluate Security Requirements: Match storage options with compliance and corporate security requirements.

Assess Performance Impact: Recognize the potential effects that access controls, encryption, and security rules may have on resource usage and application performance.
Identify Roles and Responsibilities: Clearly define who is responsible for what when it comes to managing storage components in Kubernetes clusters.

Plan for Scalability: Recognize the need for scalability and the possible maintenance costs related to implementing security measures.

Make Monitoring and Upgrades a Priority: To ensure that security measures continue to be effective over time, place a strong emphasis on continual monitoring, audits, and upgrades.

Effective storage management is critical for ensuring the security, reliability, and performance of Kubernetes clusters. By following these ten best practices for Kubernetes storage, including encryption, access control, and securing storage components, you can strengthen the security posture of your Kubernetes environment and mitigate the risk of data breaches, unauthorized access, and other security threats. Stay proactive in implementing security measures and remain vigilant against emerging threats to safeguard your Kubernetes storage infrastructure effectively. The post Mastering Kubernetes Storage: 10 Best Practices for Security and Efficiency appeared first on Amazic. View the full article
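Several of the practices above (no inline secret values in version control, encrypted volumes) lend themselves to automated policy checks over manifests before they merge. The sketch below is illustrative only: the "encrypted" StorageClass naming convention is an assumption, and a real setup would typically enforce this with an admission controller or a policy engine rather than a hand-rolled linter:

```python
def audit_manifest(manifest: dict) -> list:
    """Return policy findings for one Kubernetes manifest (parsed to a dict).

    Two illustrative checks drawn from the practices above:
      - Secrets should not carry inline values in version control.
      - PersistentVolumeClaims should request an encrypted StorageClass.
    """
    findings = []
    kind = manifest.get("kind")
    # Secret manifests store values under "data" (base64) or "stringData".
    if kind == "Secret" and ("data" in manifest or "stringData" in manifest):
        findings.append("Secret carries inline values; source it from a secrets manager")
    if kind == "PersistentVolumeClaim":
        storage_class = manifest.get("spec", {}).get("storageClassName", "")
        # Naming convention assumed for this example; match your real policy.
        if "encrypted" not in storage_class:
            findings.append("PVC does not request an encrypted StorageClass")
    return findings
```

Run over every manifest in a pull request, a check like this catches hardcoded secrets and unencrypted volume claims before they ever reach the cluster.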
  5. Learn why you need to track in-app events, what to track and what not to track, and get a few pro tips on designing your event data. View the full article
  6. Learn how to successfully plan and instrument event tracking for your websites and applications to improve data quality at the source. View the full article
  7. Agile processes, prioritizing speed, quality, and efficiency, are gradually replacing traditional software development and deployment methods in the age of rapid software delivery. CI/CD has become a fundamental component of contemporary software development, allowing teams to automate the processes of building, testing, and deploying software. The pipeline facilitates the flow of code changes from development to production and is the central component of continuous integration and delivery (CI/CD). Recent studies like “State of DevOps” highlight that the most common DevOps practices are continuous integration (CI) and continuous delivery (CD), used by 80% of organizations. In addition to speeding up software delivery, automating the CI/CD process improves product quality by enabling frequent testing and feedback loops. However, rigorous preparation, strategy, and adherence to best practices are necessary for implementing efficient automation. This article explores 7 essential tactics and industry best practices for CI/CD pipeline automation.

1. Infrastructure as Code (IaC)
Treating infrastructure like code is a cornerstone of contemporary DevOps methods. By putting infrastructure needs into code, teams may achieve consistency, reproducibility, and scalability in their environments. IaC tools like Terraform, CloudFormation, or Ansible are crucial for automating CI/CD pipelines.

Strategy 1: Use declarative code to define the infrastructure needs for provisioning and configuring CI/CD pipeline resources, including deployment targets, build servers, and testing environments.

Best Practice: To preserve version history and facilitate teamwork, store infrastructure code in version control repositories alongside application code.

2. Containerization
Containerization has completely transformed software packaging and deployment, as best demonstrated by platforms like Docker.
Containers encapsulate an application and its dependencies, providing consistency between environments. Using containerization in CI/CD pipeline automation facilitates smooth deployment and portability.

Strategy 2: As part of the continuous integration process, build Docker images to produce lightweight, portable artifacts that can be reliably deployed in a range of contexts.

Best Practice: Use container orchestration systems such as Kubernetes to manage containerized applications in production and ensure scalability, robustness, and ease of deployment.

3. Test Automation
One of the main components of CI/CD is automated testing, which helps teams validate code changes quickly and accurately. Teams can reduce the risk of regressions and ensure software quality by automating tests at different stages of the development cycle, such as unit, integration, and acceptance tests, and catching errors early.

Strategy 3: Include automated tests in the continuous integration pipeline to verify code changes immediately after each contribution, giving engineers quick feedback.

Best Practice: Use a test pyramid approach, where speed and coverage are optimized by starting with many unit tests that run quickly and moving up to fewer but more comprehensive integration and end-to-end tests at higher levels.

4. Infrastructure Monitoring and Optimization
Continuous monitoring of infrastructure performance is crucial for maintaining the reliability and efficiency of CI/CD pipelines. By leveraging monitoring tools such as Prometheus or Datadog, teams can track resource utilization, identify issues, and optimize infrastructure configurations to enhance pipeline performance.

Strategy 4: Implement automated infrastructure monitoring to track key performance metrics such as CPU usage, memory consumption, and network traffic, enabling proactive identification and resolution of issues that may impact performance.
Best Practice: Utilize alerting mechanisms to notify teams of abnormal infrastructure behavior or performance degradation, facilitating rapid response and minimizing downtime in CI/CD pipelines.

5. Security Automation and Compliance
Security is a paramount concern in any system, and integrating security practices into CI/CD pipelines is essential for mitigating risks and ensuring regulatory compliance. By automating security checks and compliance audits using tools like SonarQube or OWASP ZAP, teams can detect vulnerabilities early in the development lifecycle and enforce security standards consistently.

Strategy 5: Embed security scans and compliance checks into the CI/CD pipeline to automatically assess code quality, identify security vulnerabilities, and enforce security policies throughout the software delivery process.

Best Practice: Integrate security testing tools with your CI server and version control system (for example, Jenkins and Git) to perform automated code analysis on every commit, enabling developers to address security issues promptly and maintain a secure codebase.

6. Monitoring and Feedback
Monitoring and feedback are essential to continuous integration and delivery (CI/CD) pipelines because they offer insight into the functionality and state of deployed applications. Teams may find bottlenecks, spot anomalies, and continuously increase the efficiency of the pipeline by gathering and evaluating metrics.

Strategy 6: Instrument infrastructure and apps to record pertinent metrics and logs, allowing for proactive monitoring and troubleshooting.

Best Practice: To guarantee that deployed applications fulfill performance and reliability requirements, incorporate monitoring and alerting technologies into the CI/CD pipeline. This will allow the pipeline to detect and respond to issues automatically.

7. Infrastructure Orchestration
Orchestration is essential to automating CI/CD pipelines and controlling infrastructure as code.
Orchestration technologies such as Jenkins, CircleCI, or GitLab CI/CD keep delivery workflows running by facilitating the execution of pipeline steps, managing dependencies, and coordinating parallel operations.

Strategy 7: Use CI/CD orchestration technologies to automate pipeline steps, such as code compilation, testing, deployment, and release, to coordinate complicated workflows smoothly.

Best Practice: Defining pipeline stages and dependencies explicitly, optimizing execution order and resource use, and minimizing build times can foster a highly efficient CI/CD environment.

Conclusion
Automation lies at the core of successful CI/CD pipelines, enabling teams to deliver high-quality software quickly and at scale. By adopting the strategies and best practices outlined in this article, organizations can streamline their development workflows, reduce manual overhead, and foster a culture of collaboration and continuous improvement. Embracing automation accelerates time-to-market and enhances software delivery’s resilience, reliability, and overall quality in today’s fast-paced digital landscape. The post 7 Strategies and Best Practices for Automating CI/CD Pipelines appeared first on Amazic. View the full article
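The last best practice (define pipeline stages and dependencies explicitly so execution order and resource use can be optimized) is, at bottom, a dependency-graph problem. Here is a minimal sketch using Python's standard-library graphlib; the stage names are illustrative and not tied to any particular CI system:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def plan_pipeline(stages: dict) -> list:
    """Given {stage: set_of_dependencies}, return a valid execution order.

    Declaring dependencies explicitly (rather than hardcoding a linear
    sequence) also exposes which stages could run in parallel: any
    stages whose dependencies are all satisfied can run together.
    """
    return list(TopologicalSorter(stages).static_order())

# Example declaration: both test suites depend only on the build,
# so a CI runner could execute them in parallel before deploy.
pipeline = {
    "build": set(),
    "unit-tests": {"build"},
    "integration-tests": {"build"},
    "deploy": {"unit-tests", "integration-tests"},
}
```

The same declaration style appears in most orchestrators (e.g. `needs:` in GitLab CI or GitHub Actions); the point is that the graph, not a human, decides the order.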
  8. Maintaining a strong security posture is crucial in today’s digital landscape, and it begins with users. Trusting users with access to sensitive data and company assets is a web of complexity, and one bad apple or security gap can knock all the dominos down. In fact, Verizon’s 2023 Data Breach Investigations Report noted that 74% […] The post 9 Best Practices for Using AWS Access Analyzer appeared first on Security Boulevard. View the full article
  9. Data privacy in email communication refers to the protection and confidentiality of personal data. Learn about data privacy regulations, particularly GDPR. The post Data Privacy in Email Communication: Compliance, Risks, and Best Practices appeared first on Security Boulevard. View the full article
  10. How to bridge the dev / data divide through alignment, collaboration, early enforcement, and transparency.View the full article
  11. MySQL provides several replication configuration options. However, with so many choices, ensuring replication is configured correctly can take time and effort. Replication is a crucial initial step to enhance availability in MySQL databases. A properly designed replication architecture can significantly impact the accessibility of your data and prevent potential management complications. This article will delve into […] View the full article
  12. AI has become an integral part of my workflow these days, and with the assistance of GitHub Copilot, I move a lot faster when I’m building a project. Having used AI tools to increase my productivity over the past year, I’ve realized that similar to learning how to use a new framework or library, we can enhance our efficiency with AI tools by learning how to best use them. In this blog post, I’ll share some of the daily things I do to get the most out of GitHub Copilot. I hope these tips will help you become a more efficient and productive user of the AI assistant.

Beyond code completion
To make full use of the power of GitHub Copilot, it’s important to understand its capabilities. GitHub Copilot is developing rapidly, and new features are being added all the time. It’s no longer just a code completion tool in your editor—it now includes a chat interface that you can use in your IDE, a command line tool via a GitHub CLI extension, a summary tool in your pull requests, a helper tool in your terminals, and much, much more. In a recent blog post, I’ve listed some of the ways you didn’t know you could use GitHub Copilot. This will give you a great overview of how much the AI assistant can currently do. But beyond interacting with GitHub Copilot, how do you help it give you better answers? Well, the answer to that needs a bit more context.

Context, context, context
If you understand Large Language Models (LLMs), you will know that they are designed to make predictions based on the context provided. This means the more contextually rich our input or prompt is, the better the prediction or output will be. As such, learning to provide as much context as possible is key when interacting with GitHub Copilot, especially with the code completion feature. Unlike ChatGPT, where you need to provide all the data to the model in the prompt window, by installing GitHub Copilot in your editor, the assistant is able to infer context from the code you’re working on.
It then uses that context to provide code suggestions. We already know this, but what else can we do to give it additional context? I want to share a few essential tips with you to provide GitHub Copilot with more context in your editor to get the most relevant and useful code out of it:

1. Open your relevant files
Having your files open provides GitHub Copilot with context. When you have additional files open, it will help to inform the suggestion that is returned. Remember, if a file is closed, GitHub Copilot cannot see the file’s content in your editor, which means it cannot get the context from those closed files. GitHub Copilot looks at the current open files in your editor to analyze the context, create a prompt that gets sent to the server, and return an appropriate suggestion. Have a few files open in your editor to give GitHub Copilot a bigger picture of your project. You can also use #editor in the chat interface to provide GitHub Copilot with additional context on your currently opened files in Visual Studio Code (VS Code) and Visual Studio. https://github.blog/wp-content/uploads/2024/03/01_editor_command_open_files.mp4 Remember to close unneeded files when context switching or moving on to the next task.

2. Provide a top-level comment
Just as you would give a brief, high-level introduction to a coworker, a top-level comment in the file you’re working in can help GitHub Copilot understand the overall context of the pieces you will be creating—especially if you want your AI assistant to generate the boilerplate code for you to get going. Be sure to include details about what you need and provide a good description so it has as much information as possible. This will help to guide GitHub Copilot to give better suggestions, and give it a goal on what to work on. Having examples, especially when processing data or manipulating strings, helps quite a bit.

3. Set includes and references
It’s best to manually set the includes/imports or module references you need for your work, particularly if you’re working with a specific version of a package. GitHub Copilot will make suggestions, but you know what dependencies you want to use. This can also help to let GitHub Copilot know what frameworks, libraries, and their versions you’d like it to use when crafting suggestions. This can be helpful to jump start GitHub Copilot to a newer library version when it defaults to providing older code suggestions. https://github.blog/wp-content/uploads/2024/03/03_includes_references.mp4

4. Meaningful names matter
The names of your variables and functions matter. If you have a function named foo or bar, GitHub Copilot will not be able to give you the best completion because it isn’t able to infer intent from the names. Just as the function name fetchData() won’t mean much to a coworker (or you after a few months), fetchData() won’t mean much to GitHub Copilot either. Implementing good coding practices will help you get the most value from GitHub Copilot. While GitHub Copilot helps you code and iterate faster, remember the old rule of programming still applies: garbage in, garbage out.

5. Provide specific and well-scoped function comments
Commenting your code helps you get very specific, targeted suggestions. A function name can only be so descriptive without being overly long, so function comments can help fill in details that GitHub Copilot might need to know. One of the neat features of GitHub Copilot is that it can determine the correct comment syntax that is typically used in your programming language for function/method comments and will help create them for you based on what the code does. Adding more detail to these as the first change you do then helps GitHub Copilot determine what you would like to do in code and how to interact with that function.
Remember: Single, specific, short comments help GitHub Copilot provide better context. https://github.blog/wp-content/uploads/2024/03/05_simple_specific_short.mp4 6. Provide sample code Providing sample code to GitHub Copilot will help it determine what you’re looking for. This helps to ground the model and provide it with even more context. It also helps GitHub Copilot generate suggestions that match the language and tasks you want to achieve, and return suggestions based on your current coding standards and practices. Unit tests provide one level of sample code at the individual function/method level, but you can also provide code examples in your project showing how to do things end to end. The cool thing about using GitHub Copilot long-term is that it nudges us to do a lot of the good coding practices we should’ve been doing all along. Learn more about providing context to GitHub Copilot by watching this Youtube video: Inline Chat with GitHub Copilot Inline chat Outside of providing enough context, there are some built-in features of GitHub Copilot that you may not be taking advantage of. Inline chat, for example, gives you an opportunity to almost chat with GitHub Copilot between your lines of code. By pressing CMD + I (CTRL + I on Windows) you’ll have Copilot right there to ask questions. This is a bit more convenient for quick fixes instead of opening up GitHub Copilot Chat’s side panel. https://github.blog/wp-content/uploads/2024/03/07_a_inline_chat_animated.mp4 This experience provides you with code diffs inline, which is awesome. There are also special slash commands available like creating documentation with just the slash of a button! Tips and tricks with GitHub Copilot Chat GitHub Copilot Chat provides an experience in your editor where you can have a conversation with the AI assistant. You can improve this experience by using built-in features to make the most out of it. 8. 
Remove irrelevant requests

For example, did you know that you can delete a previously asked question in the chat interface to remove it from the indexed conversation, especially if it is no longer relevant? Doing this will improve the flow of the conversation and give GitHub Copilot only the information it needs to provide you with the best output.

9. Navigate through your conversation

Another tip I found is to use the up and down arrow keys to navigate through your conversation with GitHub Copilot Chat. I found myself scrolling through the chat interface to find that last question I asked, then discovered I can just use my keyboard arrows, just like in the terminal!

https://github.blog/wp-content/uploads/2024/03/09_up_down_arrows_animated.mp4

10. Use the @workspace agent

If you're using VS Code or Visual Studio, remember that agents are available to help you go even further. The @workspace agent, for example, is aware of your entire workspace and can answer questions related to it. As such, it can provide even more context when you're trying to get a good output from GitHub Copilot.

https://github.blog/wp-content/uploads/2024/03/10_workspace_agent.mp4

11. Highlight relevant code

Another great tip when using GitHub Copilot Chat is to highlight relevant code in your files before asking questions. This helps give targeted suggestions and provides the assistant with more context into what you need help with.

12. Organize your conversations with threads

You can have multiple ongoing conversations with GitHub Copilot Chat on different topics by isolating them with threads. We've provided a convenient way for you to start a new conversation (thread) by clicking the + sign on the chat interface.

13. Slash commands for common tasks

Slash commands are awesome, and there are quite a few of them. We have commands to help you explain code, fix code, create a new notebook, write tests, and many more.
They are just shortcuts to common prompts that we've found to be particularly helpful in day-to-day development from our own internal usage.

/explain: Get code explanations. Open a file with code or highlight the code you want explained and type: /explain what is the fetchPrediction method?
/fix: Receive a proposed fix for the problems in the selected code. Highlight the problematic code and type: /fix propose a fix for the problems in fetchAirports route
/tests: Generate unit tests for selected code. Open a file with code or highlight the code you want tests for and type: /tests
/help: Get help on using Copilot Chat. Type: /help what can you do?
/clear: Clear the current conversation. Type: /clear
/doc: Add a documentation comment. Highlight code and type: /doc. You can also press CMD+I in your editor and type /doc inline
/generate: Generate code to answer your question. Type: /generate code that validates a phone number
/optimize: Analyze and improve the running time of the selected code. Highlight code and type: /optimize fetchPrediction method
/new: Scaffold code for a new workspace. Type: /new create a new django app
/simplify: Simplify the selected code. Highlight code and type: /simplify
/feedback: Provide feedback to the team. Type: /feedback

See the following image for commands available in VS Code:

14. Attach relevant files for reference

In Visual Studio and VS Code, you can attach relevant files for GitHub Copilot Chat to reference by using #file. This scopes GitHub Copilot to a particular context in your code base and gives you a much better outcome. To reference a file, type # in the comment box and choose #file, and you will see a popup where you can choose your file. You can also type #file_name.py in the comment box. See below for an example:

https://github.blog/wp-content/uploads/2024/03/14_attach_filename.mp4

15.
Start with GitHub Copilot Chat for faster debugging

These days, whenever I need to debug some code, I turn to GitHub Copilot Chat first. Most recently, I was implementing a decision tree and performed a k-fold cross-validation. I kept getting incorrect accuracy scores and couldn't figure out why. I turned to GitHub Copilot Chat for some assistance, and it turns out I wasn't using my training data set (X_train, y_train), even though I thought I was:

I'm catching up on my AI/ML studies today. I had to implement a DecisionTree and use the cross_val_score method to evaluate the model's accuracy score. I couldn't figure out why the incorrect values for the accuracy scores were being returned, so I turned to Chat for some help pic.twitter.com/xn2ctMjAnr — Kedasha is learning about AI + ML (@itsthatladydev) March 23, 2024

I figured this out a lot faster than I would have with external resources. I want to encourage you to start with GitHub Copilot Chat in your editor to get debugging help faster instead of going to external resources first. Follow my example above by explaining the problem, pasting the problematic code, and asking for help. You can also highlight the problematic code in your editor and use the /fix command in the chat interface.

Be on the lookout for sparkles!

In VS Code, you can quickly get help from GitHub Copilot by looking out for "magic sparkles." For example, in the commit comment section, clicking the magic sparkles will help you generate a commit message with the help of AI. You can also find magic sparkles inline in your editor as you're working, for a quick way to access GitHub Copilot inline chat.

https://github.blog/wp-content/uploads/2024/03/15_magic_sparkles.mp4

Pressing them will use AI to help you fill out the data, and more magic sparkles are being added wherever we find other places for GitHub Copilot to help in your day-to-day coding experience.
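To make the cross-validation pitfall described above concrete, here is a dependency-free sketch of k-fold evaluation (the helper names are hypothetical, not scikit-learn's API): the routine should only ever see the training split (X_train, y_train), never the arrays you thought you had already split.

```python
def k_fold_indices(n_samples, k):
    # Split range(n_samples) into k contiguous folds of near-equal size.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(fit, predict, X_train, y_train, k=5):
    # Hold out each fold in turn, fit on the rest, and score on the held-out fold.
    # The key point from the debugging story: pass the *training* split here,
    # not the full, unsplit dataset.
    scores = []
    for held_out in k_fold_indices(len(X_train), k):
        train_idx = [i for i in range(len(X_train)) if i not in held_out]
        model = fit([X_train[i] for i in train_idx], [y_train[i] for i in train_idx])
        preds = [predict(model, X_train[i]) for i in held_out]
        correct = sum(p == y_train[i] for p, i in zip(preds, held_out))
        scores.append(correct / len(held_out))
    return scores
```

With scikit-learn the equivalent mistake is calling cross_val_score with X and y instead of X_train and y_train; the sketch just makes the data flow explicit.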
Know where your AI assistant shines

To get the most out of the tool, remember that context and prompt crafting are essential, and so is understanding where the tool shines. GitHub Copilot is very good at writing boilerplate code and scaffolding, writing unit tests, writing documentation, pattern matching, explaining uncommon or confusing syntax, cron jobs, and regular expressions, helping you remember things you've forgotten, and debugging.

But never forget that you are in control, and GitHub Copilot is here as just that: your copilot. It is a tool that can help you write code faster, and it's up to you to decide how best to use it. It is not here to do your work for you or to write everything for you. It will guide you and nudge you in the right direction, just as a coworker would if you asked them questions or for guidance on a particular issue.

I hope these tips and best practices were helpful. You can significantly improve your coding efficiency and output by properly leveraging GitHub Copilot. Learn more about how GitHub Copilot works by reading Inside GitHub: Working with the LLMs behind GitHub Copilot and Customizing and fine-tuning LLMs: What you need to know.

Harness the power of GitHub Copilot. Learn more or get started now.
  13. In today's fast-paced world of technology, efficient application deployment and management are crucial. Kubernetes, a game-changing platform for container orchestration, is at the forefront of this evolution. At Atmosly, we leverage Kubernetes to empower organizations in navigating the rapidly evolving digital landscape, offering solutions that intertwine with Kubernetes' strengths to enhance your technological capabilities. What Is Kubernetes? Kubernetes, or K8s, is a revolutionary container orchestration system born at Google. It has become a cornerstone of contemporary IT infrastructure, providing robust solutions for deploying, scaling, and managing containerized applications. At Atmosly, we integrate Kubernetes into our offerings, ensuring our clients benefit from its scalable, reliable, and efficient nature. View the full article
  14. Learn about the benefits, best practices and see an example of a payroll register in this comprehensive guide.View the full article
  15. In the dynamic realm of software development and deployment, Docker has emerged as a cornerstone technology, revolutionizing the way developers package, distribute, and manage applications. Docker simplifies the process of handling applications by containerizing them, ensuring consistency across various computing environments. A critical aspect of Docker that often puzzles many is Docker networking. It’s an essential feature, enabling containers to communicate with each other and the outside world. This ultimate guide aims to demystify Docker networking, offering you tips, tricks, and best practices to leverage Docker networking effectively. Understanding Docker Networking Basics Docker networking allows containers to communicate with each other and with other networks. Docker provides several network drivers, each serving different use cases: View the full article
  16. There are several steps involved in implementing a data pipeline that integrates Apache Kafka with AWS RDS and uses AWS Lambda and API Gateway to feed data into a web application. Here is a high-level overview of how to architect this solution: 1. Set Up Apache Kafka Apache Kafka is a distributed streaming platform that is capable of handling trillions of events a day. To set up Kafka, you can either install it on an EC2 instance or use Amazon Managed Streaming for Kafka (Amazon MSK), which is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. View the full article
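The Lambda stage of such a pipeline can be sketched as follows. This is a simplified illustration assuming the Amazon MSK trigger event shape (records grouped under "topic-partition" keys, with each record value base64-encoded); the handler name is arbitrary, and the RDS write is stubbed out rather than using a real database client:

```python
import base64
import json

def handler(event, context=None):
    # Decode each Kafka record delivered by the MSK trigger. In a real handler,
    # each decoded payload would be inserted into RDS (e.g. via a DB client);
    # here we just collect the rows to keep the sketch self-contained.
    rows = []
    for records in event.get("records", {}).values():
        for record in records:
            payload = json.loads(base64.b64decode(record["value"]))
            rows.append(payload)  # INSERT into RDS would happen here
    return {"processed": len(rows), "rows": rows}
```

API Gateway would then front a separate Lambda that reads these rows back out of RDS for the web application.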
  17. A swift response to a major outage can make a big difference in retaining customer confidence.View the full article
  18. Because of the critical nature of the DevOps pipeline, security is becoming a top priority. Here's how to integrate DevSecOps.View the full article
  19. Docker images play a pivotal role in containerized application deployment. They encapsulate your application and its dependencies, ensuring consistent and efficient deployment across various environments. However, security is a paramount concern when working with Docker images. In this guide, we will explore security best practices for Docker images to help you create and maintain secure images for your containerized applications. 1. Introduction The Significance of Docker Images Docker images are at the core of containerization, offering a standardized approach to packaging applications and their dependencies. They allow developers to work in controlled environments and empower DevOps teams to deploy applications consistently across various platforms. However, the advantages of Docker images come with security challenges, making it essential to adopt best practices to protect your containerized applications. View the full article
  20. Private Service Connect is a Cloud Networking offering that creates a private and secure connection from your VPC networks to a service producer, and is designed to help you consume services faster, protect your data, and simplify service management. However, like all complex networking setups, sometimes things don't work as planned. In this post, you will find useful tips that can help you tackle issues related to Private Service Connect, even before reaching out to Cloud Support.

Introduction to Private Service Connect

Before we get into the troubleshooting bits, let's briefly discuss the basics of Private Service Connect. Understanding your setup is key to isolating the problem. Private Service Connect is similar to private services access, except that the service producer VPC network doesn't connect to your (consumer) network using VPC network peering. A Private Service Connect service producer can be Google, a third party, or even yourself.

When we talk about consumers and producers, it's important to understand what type of Private Service Connect is configured on the consumer side and what kind of managed service it intends to connect with on the producer side. Consumers are the ones who want the services, while producers are the ones who provide them. The various types of Private Service Connect configurations are:

Private Service Connect endpoints are configured as forwarding rules that are allocated an IP address and mapped to a managed service by targeting a Google APIs bundle or a service attachment. These managed services can be diverse, ranging from global Google APIs to Google-managed services, third-party services, and even in-house, intra-organization services. When a consumer creates an endpoint that references a Google APIs bundle, the endpoint's IP address is a global internal IP address: the consumer picks an internal IP address that's outside all subnets of the consumer's VPC network and connected networks.
When a consumer creates an endpoint that references a service attachment, the endpoint's IP address is a regional internal IP address in the consumer's VPC network, from a subnet in the same region as the service attachment.

Private Service Connect backends are configured with a special Network Endpoint Group of type Private Service Connect, which refers to a Google API or to a published service's service attachment. A service attachment is your link to a compatible producer load balancer.

Private Service Connect interfaces are a special type of network interface that allows service producers to initiate connections to service consumers.

How Private Service Connect works

Network Address Translation (NAT) is the underlying network technology that powers Private Service Connect, using Google Cloud's software-defined networking stack, Andromeda. Let's break down how Private Service Connect works to access a published service based on an internal passthrough load balancer using a Private Service Connect endpoint. In this scenario, you set up a Private Service Connect endpoint on the consumer side by configuring a forwarding rule that targets a service attachment. This endpoint has an IP address within your VPC network.
When a VM instance in the VPC network sends traffic to this endpoint, the host's networking stack applies client-side load balancing to pick a destination host based on location, load, and health. The packets are encapsulated and routed through Google Cloud's network fabric. At the destination host, the packet processor applies Source Network Address Translation (SNAT) and Destination Network Address Translation (DNAT), using the configured NAT subnet and the producer IP address of the service, respectively. The packet is then delivered to the VM instance serving as the load balancer's backend. All of this is orchestrated by Andromeda's control plane; with a few exceptions, there are no middle boxes or intermediaries involved in this process, enabling you to achieve line-rate performance. For additional details, see Private Service Connect architecture and performance.

With this background, you should already be able to identify the main components where issues can occur: the source host, the network fabric, the destination host, and the control plane.

Know your troubleshooting tools

The Google Cloud console provides you with the following tools to troubleshoot most of the Private Service Connect issues that you might encounter.

Connectivity Tests

Connectivity Tests is a diagnostics tool that lets you check connectivity between network endpoints. It analyzes your configuration and, in some cases, performs live data-plane analysis between the endpoints.

Configuration Analysis supports Private Service Connect: consumers can check connectivity from their source systems to Private Service Connect endpoints (or consumer load balancers using Private Service Connect NEG backends), while producers can verify that their service is operational for consumers.

Live Data Plane Analysis supports both Private Service Connect endpoints for published services and Google APIs: verify reachability and latency between hosts by sending probe packets over the data plane.
This feature provides baseline diagnostics of latency and packet loss. In cases where Live Data Plane Analysis is not available, consumers can coordinate with a service producer to collect simultaneous packet captures at the source and destination using tcpdump.

Cloud Logging

Cloud Logging is a fully managed service that allows you to store, search, analyze, monitor, and alert on logging data and events. Audit logs let you monitor Private Service Connect activity. Use them to track intentional or unintentional changes to Private Service Connect resources, find errors or warnings, and monitor changes in connection status for the endpoint. These are mostly useful when troubleshooting issues during setup or updates to the configuration.

In this example, you can track endpoint connection status changes (pscConnectionStatus) by examining audit logs for your GCE forwarding rule resource:

resource.type="gce_forwarding_rule"
protoPayload.methodName="LogPscConnectionStatusUpdate"

VPC Flow Logs

Use VPC Flow Logs to monitor Private Service Connect traffic. Consumers can enable VPC Flow Logs on the client subnet to monitor traffic directed to the Private Service Connect endpoint, which lets the consumer validate traffic egressing the VM instance. Producers can enable VPC Flow Logs on the target load balancer subnet to monitor traffic ingressing their backend VM instances. Consider that VPC Flow Logs are sampled and may not capture short-lived connections. To get more detailed information, run a packet capture using tcpdump.

Cloud Monitoring

Another member of the observability stack, Cloud Monitoring can help you gain visibility into the performance of Private Service Connect.
Producer metrics to monitor published services. Take a look at the utilization of service attachment resources such as NAT ports, connected forwarding rules, and connections by service attachment ID, and correlate them with connectivity and performance issues. See if there are any dropped packets on the producer side (Preview feature): received packets dropped are related to NAT resource exhaustion, while sent packets dropped indicate that a service backend is sending packets to a consumer after the NAT translation state has expired. When this occurs, make sure you are following the NAT subnets recommendations. A packet capture can bring more insight into the nature of the dropped packets.

Using this MQL query, producers can monitor NAT subnet capacity for a specific service attachment:

fetch gce_service_attachment
| metric
    'compute.googleapis.com/private_service_connect/producer/used_nat_ip_addresses'
| filter (resource.region == "us-central1"
    && resource.service_attachment_id == "[SERVICE_ATTACHMENT_ID]")
| group_by 1m,
    [value_used_nat_ip_addresses_mean: mean(value.used_nat_ip_addresses)]
| every 1m
| group_by [resource.service_attachment_id],
    [value_used_nat_ip_addresses_mean_mean:
        mean(value_used_nat_ip_addresses_mean)]

Consumer metrics to monitor endpoints. You can track the number of connections created, opened, and closed from clients to the Private Service Connect endpoint. If you see packet drops, take a look at the producer metrics as well. For more information, see Monitor Private Service Connect connections.

TIP: Be proactive and set alerts to inform you when you are close to exhausting a known limit (including Private Service Connect quotas). In this example, you can use this MQL query to track PSC internal load balancer forwarding rules quota usage:
fetch compute.googleapis.com/VpcNetwork
| metric
    'compute.googleapis.com/quota/psc_ilb_consumer_forwarding_rules_per_producer_vpc_network/usage'
| group_by 1m, [value_usage_mean: mean(value.usage)]
| every 1m
| group_by [], [value_usage_mean_aggregate: aggregate(value_usage_mean)]

Read the manual

Consult the Google Cloud documentation to learn about the limitations and supported configurations:

Follow the Private Service Connect guides. Especially for new deployments, it is common to misconfigure a component or find that it is not compatible or supported yet. Ensure that you have gone through the right configuration steps, and review the limitations and compatibility matrix.
Take a look at the VPC release notes. See if there are any known issues related to Private Service Connect, and look for any new features that could have introduced unwanted behavior.

Common issues

Selecting the right tool depends on the specific situation you encounter and where you are in the life cycle of your Private Service Connect journey. Before you start, gather consumer and producer project details, and confirm that this is in fact a Private Service Connect issue, and not a private services access problem. Generally, you can face issues during setup or update of any related component or capability, or issues can appear during runtime, when everything is configured but you run into connectivity or performance problems.

Issues during setup

Make sure that you are following the configuration guide and that you understand the scope and limitations.
Check for any error messages or warnings in the Logs Explorer.
Verify that the setup is compatible and supported as per the configuration guides.
See if any related quota has been exceeded, such as the Private Service Connect forwarding rules quota.
Confirm whether there is an organization policy that could prevent the configuration of Private Service Connect components.

Issues during runtime

Isolate the issue to the consumer or the producer side of the connection:

If you are on the consumer side, check whether your endpoint or backend is accepted in the connection status on the Private Service Connect page. Otherwise, review the accept/reject connection list and the connection reconciliation setup on the producer side.
If your endpoint is unreachable, check bypassing DNS resolution, and run a Connectivity Test to validate routes and firewalls from the source endpoint IP address to the Private Service Connect endpoint as destination. On the service producer side, check whether the producer service is reachable within the producer VPC network, and from an IP address in the Private Service Connect NAT subnet.
If there is a performance issue such as network latency or packet drops, check whether Live Data Plane Analysis is available to determine a baseline and isolate an issue with the application or service. Also check the Metrics Explorer for any connection or port exhaustion and packet drops.

Working with Cloud Support

Once you have pinpointed the issue and analyzed the problem, you may need to reach out to Cloud Support for further assistance. To facilitate a smooth experience, be sure to explain your needs, clearly describe the business impact, and give enough context with all the information collected. View the full article
  21. DevOps is a popular workflow approach that helps companies speed up software development and delivery by having developers and operations teams work together. Hybrid environments.....View the full article
  22. Developing and running secure Docker applications demands a strategic approach, encompassing considerations like avoiding unnecessary bloat in images and access methods. One crucial aspect to master in Docker development is understanding image layering and optimization. Docker images are constructed using layers, each representing specific changes or instructions in the image's build process.

In this article, we'll delve into the significance of Docker image layering, the importance of choosing minimal base images, and practical approaches like multi-stage builds. Additionally, we'll discuss the critical practices of running applications as non-root users, checking images for vulnerabilities using tools like Docker Scout, and implementing Docker Content Trust for image integrity. This comprehensive guide aims to equip developers and operators with actionable insights to enhance the security and efficiency of Docker applications.

Understanding Docker image layering

Before we jump into Docker security aspects, we need to understand Docker image layering and optimization. For a better understanding, let's consider this Dockerfile, retrieved from a sample repository. It's a simple React program that prints "Hello World." The core code uses React, a JavaScript library for building user interfaces.

Docker images comprise layers, and each layer represents a set of file changes or instructions in the image's construction. These layers are stacked on each other to form the complete image (Figure 1). To combine them, a "unioned filesystem" is created, which takes all of the layers of the image and overlays them together. These layers are immutable. When you're building an image, you're simply creating new filesystem diffs, not modifying previous layers.

Figure 1: Visual representation of layers in a Docker image.

When you build a Docker image, each instruction in your Dockerfile creates a new layer.
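The "unioned filesystem" idea can be modeled in a few lines of Python. This is a toy illustration, not Docker's actual overlay implementation: each layer is an immutable mapping of path to contents, and the image view overlays later layers on earlier ones.

```python
def union_view(layers):
    # Overlay layer diffs in order: later layers win, like an overlay mount.
    view = {}
    for layer in layers:
        view.update(layer)
    return view

# Three hypothetical layers for a Node app image:
base = {"/etc/os-release": "alpine", "/bin/sh": "busybox"}   # FROM
deps = {"/app/node_modules": "express,react"}                # RUN npm install
code = {"/app/index.js": "console.log('hi')"}                # COPY . .

image = union_view([base, deps, code])
```

Because each layer is just a diff, replacing only the top (code) layer leaves the base and dependency layers untouched, which is exactly why layer caching works.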
Layers are cached, so if you make a change in your code and rebuild the image, only the layers affected by that change will be recreated, saving time and bandwidth. This layering system makes images efficient to use.

You might notice that there are two COPY instructions (as shown in Figure 1). The first COPY instruction copies only package.json (and potentially package-lock.json) into the image. The second COPY instruction copies the remaining application code (excluding files already copied in the first COPY command). If only application code changes, the first two layers are cached, avoiding re-downloading and reinstalling dependencies, which can significantly speed up builds.

1. Choose a minimal base image

Docker Hub has millions of images, and choosing the right image for your application is important. It is always better to consider a minimal base image with a small size: slimmer images contain fewer dependencies, resulting in a smaller attack surface. A smaller image not only improves your image security, but also reduces the time for pulling and pushing images, optimizing the overall development lifecycle.

Figure 2: Example of Docker images with different sizes.

As depicted in Figure 2, we opted for the node:21.6-alpine3.18 image due to its smaller footprint. We selected the Alpine image for our Node application below because it omits additional tools and packages present in the default Node image. This decision aligns with good security practice, as it minimizes the attack surface by eliminating components unnecessary for running your application.
# Use the official Node.js image with Alpine Linux as the base image
FROM node:21.6-alpine3.18

# Set the working directory inside the container to /app
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install Node.js dependencies based on the package.json
RUN npm install

# Copy all files from the current directory to the working directory in the container
COPY . .

# Expose port 3000
EXPOSE 3000

# Define the command to run your application when the container starts
CMD ["npm", "start"]

2. Use multi-stage builds

Multi-stage builds offer a great way to streamline Docker images, making them smaller and more secure. They allow us to trim down a hefty 1.9 GB image to a lean 140 MB by using different build stages. In this approach, we leverage multiple FROM statements and carefully pick only the necessary pieces from one stage to another.

We have converted our Dockerfile to a multi-stage one (Figure 3). In the first stage, we use a Node.js image to build the app, manage dependencies, and create the application files (see the Dockerfile below). In the second stage, we copy the lightweight files generated in the first stage and use Nginx to run them. The build tools required to build the app are skipped in the final stage, which is why the final image is small and suitable for the production environment. It also demonstrates that we don't need to ship the heavyweight system we build on; we can copy the build artifacts to a lighter runner.

Figure 3: High-level representation of Docker multi-stage build.

# Stage 1: Build the application
FROM node:21.6-alpine3.18 AS builder

# Set the working directory for the build stage
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application source code into the container
COPY . .
# Build the application
RUN npm run build

# Stage 2: Create the final image
FROM nginx:1.20

# Set the working directory within the container
WORKDIR /app

# Copy the built application files from the builder stage to the nginx html directory
COPY --from=builder /app/build /usr/share/nginx/html

# Expose port 80 for the web server
EXPOSE 80

# Start nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]

You can access this Dockerfile directly from a repository on GitHub.

3. Check your images for vulnerabilities using Docker Scout

Let's look again at the multi-stage Dockerfile above. You can run the following command to build a Docker image:

docker build -t react-app-multi-stage . -f Dockerfile.multi

Once the build process is complete, the CLI lets you view a summary of image vulnerabilities and recommendations. That's what Docker Scout is all about.

=> exporting to image                                                      0.0s
=> => exporting layers                                                     0.0s
=> => writing image sha256:f348bcb19411fa1c4abf2e682f3dded7963c0c0c9b39c31804df5cd0e0f185d9  0.0s
=> => naming to docker.io/library/react-node-app                           0.0s

View build details: docker-desktop://dashboard/build/desktop-linux/desktop-linux/sci2bo7xihgwnfihigd8x9uh1

What's Next?
View a summary of image vulnerabilities and recommendations → docker scout quickview

Docker Scout analyzes the contents of container images and generates a report of packages and vulnerabilities that it detects, helping users identify and remediate issues. Docker Scout image analysis is more than point-in-time scanning; the analysis is reevaluated continuously, meaning you don't need to re-scan the image to see an updated vulnerability report. If your base image has a security concern, Docker Scout will check for updates and patches and suggest how to replace the image. If issues exist in other layers, Docker Scout will reveal precisely where they were introduced and make recommendations accordingly (Figure 4).

Figure 4: How Docker Scout works.

Docker Scout uses Software Bills of Materials (SBOMs) to cross-reference with streaming Common Vulnerabilities and Exposures (CVE) data to surface vulnerabilities (and potential remediation recommendations) as soon as possible. An SBOM is a nested inventory: a list of ingredients that make up software components. Docker Scout is built on a streaming, event-driven data model, providing actionable CVE reports. Once the SBOM is generated, Docker Scout automatically checks existing SBOMs against new CVEs, so you will see automatic updates for new CVEs without re-scanning artifacts.

After building the image, we will open Docker Desktop (ensure you have the latest version installed), analyze the level of vulnerabilities, and fix them. We can also use Docker Scout from the Docker CLI, but Docker Desktop gives you a better way to visualize the results. Select Docker Scout from the sidebar and choose the image. Here, we have chosen react-app-multi-stage, which we just built. As you can see, Scout immediately shows vulnerabilities and their severity. We can select View packages and CVEs to take a deeper look and get recommendations (Figure 5).

Figure 5: Docker Scout tab in Docker Desktop.
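Conceptually, the SBOM-to-CVE cross-referencing works like the following simplified sketch (the data and function names are illustrative, and real CVE matching uses version-range semantics; this is not Docker Scout's actual engine):

```python
def match_cves(sbom, cves):
    # An SBOM here is a list of (package, version) pairs; each CVE names an
    # affected package and the exact versions it applies to. Real scanners
    # match version *ranges*; exact-set matching keeps the sketch simple.
    findings = []
    for package, version in sbom:
        for cve in cves:
            if cve["package"] == package and version in cve["affected_versions"]:
                findings.append((package, version, cve["id"]))
    return findings

sbom = [("nginx", "1.20"), ("libssl", "3.0.1")]
cves = [  # placeholder IDs, not real CVE numbers
    {"id": "CVE-XXXX-0001", "package": "nginx", "affected_versions": {"1.20", "1.21"}},
    {"id": "CVE-XXXX-0002", "package": "zlib", "affected_versions": {"1.2.11"}},
]
findings = match_cves(sbom, cves)
```

Because the SBOM is kept after analysis, new CVE data can be matched against it as it streams in, without re-scanning the image.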
Now a window will open showing a detailed report of the vulnerabilities with a layer-wise breakdown (Figure 6).

Figure 6: Detailed report of vulnerabilities.

To get recommendations for fixing the image vulnerabilities, select Recommended Fixes in the top-right corner, and a dialog box will open with the recommended fixes. As shown in Figure 7, it recommends upgrading Nginx from version 1.20 to 1.24, which has fewer vulnerabilities and fixes all the critical and high-severity issues. Notably, even though version 1.25 was available, Scout still recommends version 1.24, because 1.25 carries critical vulnerabilities that 1.24 does not.

Figure 7: Recommendation tab for fixing vulnerabilities in Docker Desktop.

Now we need to rebuild our image, changing the base image of the final stage to the recommended version 1.24 (Figure 8), which will fix those vulnerabilities.

Figure 8: Advanced image analysis with Docker Scout.

The key features and capabilities of Docker Scout include:

Unified view: Docker Scout provides a single view of your application’s dependencies across all layers, allowing you to easily understand your image composition and identify remediation steps.

Event-driven vulnerability updates: Docker Scout uses an event-driven data model to continuously detect and surface vulnerabilities, ensuring that analysis is always up to date and based on the latest CVEs.

In-context remediation recommendations: Docker Scout provides integrated recommendations visible in Docker Desktop, suggesting remediation options for base image updates as well as dependency updates within your application code layers.

Note that Docker Scout is available through multiple interfaces, including the Docker Desktop and Docker Hub user interfaces, as well as a web-based user interface and a command-line interface (CLI) plugin.
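The CLI plugin can produce the same reports from the terminal. As a minimal sketch, assuming the react-app-multi-stage tag built earlier (output varies with the image contents, so none is shown here):

```shell
# High-level summary: vulnerability counts and base-image status
docker scout quickview react-app-multi-stage

# Detailed list of CVEs, grouped by affected package
docker scout cves react-app-multi-stage

# Suggested base-image updates, comparable to the Recommended Fixes dialog
docker scout recommendations react-app-multi-stage
```

Because these commands print plain-text reports, they are convenient to run as a gating step in a CI pipeline.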
Users can view and interact with Docker Scout through these interfaces to gain a deeper understanding of the composition and security of their container images.

4. Use Docker Content Trust

Docker Content Trust (DCT) lets you sign and verify Docker images, ensuring they come from trusted sources and haven’t been tampered with. This process acts like a digital seal of approval for images, whether signed by people or automated processes.

To enable Docker Content Trust, follow these steps:

Initialize Docker Content Trust

Before you can sign images, ensure that Docker Content Trust is enabled. Open a terminal and run the following command:

export DOCKER_CONTENT_TRUST=1

Sign the Docker image

Build and sign the Docker image using the following commands:

docker build -t <your_namespace>/node-app:v1.0 .
docker trust sign <your_namespace>/node-app:v1.0

...
v1.0: digest: sha256:5fa48a9b4e52a9d9681a5786b4885be080668d06019e91eece6dfded5a0f8a47 size: 1986
Signing and pushing trust metadata
Enter passphrase for <namespace> key with ID 96c9857:
Successfully signed docker.io/<your_namespace>/node-app:v1.0

Push the signed image to a registry

You can push the signed Docker image to a registry with:

docker push <your_namespace>/node-app:v1.0

Verify the signature

To verify the signature of an image, use the following command:

docker trust inspect --pretty <your_namespace>/node-app:v1.0

Signatures for <your_namespace>/node-app:v1.0

SIGNED TAG   DIGEST                                            SIGNERS
v1.0         5fa48a9b4e52a9d968XXXXXX19e91eece6dfded5a0f8a47   <your_namespace>

List of signers and their keys for <your_namespace>/node-app:v1.0

SIGNER       KEYS
ajeetraina   96c985786950

Administrative keys for <your_namespace>/node-app:v1.0

Repository Key: 47214511f851e28018a7b0443XXXXXXc7d5846bf6f7
Root Key: 52bae142a9ac98a473c5275bXXXXXX2f4f5068081d567903dd

By following these steps, you’ve enabled Docker Content Trust for your Node.js application, signing and verifying the image to enhance security and ensure the integrity of your containerized application throughout
its lifecycle.

5. Practice least privilege

Security is crucial in containerized environments. Embracing the principle of least privilege ensures that Docker containers operate with only the necessary permissions, thereby reducing the attack surface and mitigating potential security risks. Let’s explore specific best practices for achieving least privilege in Docker.

Run as a non-root user

We minimize potential risks by running applications without unnecessary high-level access (root privileges). Many applications don’t need root privileges, so in the Dockerfile we can create a non-root system user and run the application inside the container with that user’s limited privileges, improving security and adhering to the principle of least privilege.

# Stage 1: Build the application
FROM node:21.6-alpine3.18 AS builder

# Set the working directory for the build stage
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the application source code into the container
COPY . .
# Build the application
RUN npm run build

# Stage 2: Create the final image
FROM nginx:1.20

# Set the working directory within the container
WORKDIR /app

# Set ownership and permissions for the nginx user
RUN chown -R nginx:nginx /app && \
    chmod -R 755 /app && \
    chown -R nginx:nginx /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx && \
    chown -R nginx:nginx /etc/nginx/conf.d

# Create the nginx PID file and set appropriate permissions
RUN touch /var/run/nginx.pid && \
    chown -R nginx:nginx /var/run/nginx.pid

# Switch to the nginx user
USER nginx

# Copy the built application files from the builder stage to the nginx html directory
COPY --from=builder /app/build /usr/share/nginx/html

# Expose port 80 for the web server
EXPOSE 80

# CMD to start nginx in the foreground
CMD ["nginx", "-g", "daemon off;"]

If we are using Node as the final base image (Figure 9), we can add USER node to our Dockerfile to run the application as a non-root user. The node user is created within the Node image with restricted permissions, unlike the root user, which has full control over the system. By default, the Docker Node image includes a non-root node user that you can use to avoid running your application container as root.

Figure 9: Images tab in Docker Desktop.

Limit capabilities

Limiting Linux kernel capabilities is crucial for controlling the privileges available to containers. Docker, by default, runs with a restricted set of capabilities. You can enhance security by dropping unnecessary capabilities and adding only the ones required:

docker run --cap-drop all --cap-add CHOWN node-app

Let’s take our simple Hello World React containerized app and see how it fits into these least-privilege practices:

FROM node:21.6-alpine3.18

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .
EXPOSE 3000

# Start the application
CMD ["npm", "start"]

Note that capabilities are a runtime setting, not a Dockerfile instruction; they are dropped when the container is started:

docker run --cap-drop all --cap-add CHOWN node-app

Add the --no-new-privileges flag

Running containers with the --security-opt=no-new-privileges flag is essential to prevent privilege escalation through setuid or setgid binaries. The setuid and setgid binaries allow users to run an executable with the file system permissions of the executable’s owner or group, respectively, and to change behavior in directories. This flag ensures that the container’s privileges cannot be escalated during runtime:

docker run --security-opt=no-new-privileges node-app

Disable inter-container communication

Inter-container communication (icc) is enabled by default in Docker, allowing containers to communicate using the docker0 bridge network. docker0 bridges your container’s network (or any Compose networks) to the host’s main network interface, meaning your containers can access the network and you can access the containers. Disabling icc enhances security, requiring explicit communication definitions with --link options. Note that icc is a daemon-level setting, configured when the Docker daemon starts:

dockerd --icc=false

Use Linux Security Modules

When you’re running applications in Docker containers, you want to make sure they’re as secure as possible. One way to do this is by using Linux Security Modules (LSMs), such as seccomp, AppArmor, or SELinux. These tools provide additional layers of protection for Linux systems and containerized applications by controlling which actions a container can perform on the host system:

Seccomp is a Linux kernel feature that allows a process to make a one-way transition into a “secure” state where it is restricted to a reduced set of system calls. It reduces the process’s attack surface and the potential impact if it is compromised.

AppArmor confines individual programs to predefined rules, specifying their allowed behavior and limiting access to files and resources.
SELinux enforces mandatory access control policies, defining rules for interactions between processes and system resources to mitigate the risk of privilege escalation and enforce least-privilege principles.

By leveraging these LSMs, administrators can enhance the security posture of their systems and applications, safeguarding against various threats and vulnerabilities. For instance, our simple Hello World React application runs under Docker’s default seccomp profile unless it is overridden with the --security-opt option. This flexibility enables administrators to explicitly define security policies based on their specific requirements, as demonstrated in the following command:

docker run --rm -it --security-opt seccomp=/path/to/seccomp/profile.json node-app

Customize seccomp profiles

Customizing seccomp profiles at runtime offers several benefits:

Flexibility: By separating the seccomp configuration from the Dockerfile, you can adjust the security settings without modifying the image itself. This approach allows for easier experimentation and iteration.

Granular control: Custom seccomp profiles let you precisely define which system calls are permitted or denied within your containers. This level of granularity allows you to tailor the security settings to the specific requirements of your application.

Security compliance: In environments with strict security requirements, custom seccomp profiles can help ensure compliance by enforcing tighter restrictions on containerized processes.

Limit container resources

In Docker, containers may consume CPU and RAM up to the extent allowed by the host kernel scheduler. While this flexibility facilitates efficient resource utilization, it also introduces potential risks:

Security breaches: In the unfortunate event of a container compromise, attackers could exploit its unrestricted access to host resources for malicious activities.
For instance, a compromised container could be exploited to mine cryptocurrency or execute other nefarious actions.

Performance bottlenecks: Resource-intensive containers can monopolize system resources, leading to performance degradation or service outages across your applications.

To mitigate these risks effectively, it’s crucial to establish clear resource limits for your containers:

Allocate resources wisely: Assign specific amounts of CPU and RAM to each container to ensure fair distribution and prevent resource dominance.

Enforce boundaries: Set hard limits that containers cannot exceed, effectively containing potential damage and thwarting resource exhaustion attacks.

Promote harmony: Efficient resource management ensures stability, allowing containers to operate smoothly and fulfill their tasks without contention.

For example, to limit CPU usage, you can run the container with:

docker run -it --cpus=".5" node-app

This command limits the container to 50% of a single CPU core.

Remember, setting resource limits isn’t just about efficiency: it’s a vital security measure that safeguards your host system and promotes harmony among your containerized applications. To prevent potential denial-of-service (DoS) attacks, limiting resources such as memory, CPU, file descriptors, and processes is crucial. Docker provides flags to set these limits for individual containers:

--restart=on-failure:<number_of_restarts>
--ulimit nofile=<number>
--ulimit nproc=<number>

By diligently adhering to these least-privilege principles, you can establish a robust security posture for your Docker containers.

6. Choose the right base image

Finding the right image can seem daunting with more than 8.3 million repositories on Docker Hub. Two beacons can help guide you toward safe waters: Docker Official Images (DOI) and Docker Verified Publisher (DVP) badges.
Docker Official Images (marked by a blue badge shield) offer a curated set of open source and drop-in solution repositories. These are your go-to for common bases like Ubuntu, Python, or Nginx. Imagine them as trusty ships, built with quality materials and regularly inspected for seaworthiness. Docker Verified Publisher Images (signified by a gold check mark) are like trusted partners, organizations who have teamed up with Docker to offer high-quality images. Docker verifies the authenticity and security of their content, giving you extra peace of mind. Think of them as sleek yachts, built by experienced shipwrights and certified by maritime authorities. Remember that Docker Official Images are a great starting point for common needs, and Verified Publisher images offer an extra layer of trust and security for crucial projects. Conclusion Optimizing Docker images for security involves a multifaceted approach, addressing image size, access controls, and vulnerability management. By understanding Docker image layering and leveraging practices such as choosing minimal base images and employing multi-stage builds, developers can significantly enhance efficiency and security. Running applications with least privileges, monitoring vulnerabilities with tools like Docker Scout, and implementing content trust further fortify the containerized ecosystem. As the Docker landscape evolves, staying informed about best practices and adopting proactive security measures is paramount. This guide serves as a valuable resource, empowering developers and operators to navigate the seas of Docker security with confidence and ensuring their applications are not only functional but also resilient to potential threats. Learn more Subscribe to the Docker Newsletter. Get the latest release of Docker Desktop. Get started with Docker Scout. Vote on what’s next! Check out our public roadmap. Have questions? The Docker community is here to help. New to Docker? Get started. View the full article
  23. Introduction

Today customers want to reduce manual operations for deploying and maintaining their infrastructure. The recommended method to deploy and manage infrastructure on AWS is to follow the Infrastructure-as-Code (IaC) model using tools like AWS CloudFormation, AWS Cloud Development Kit (AWS CDK) or Terraform.

One of the critical components in terraform is managing the state file, which keeps track of your configuration and resources. When you run terraform in an AWS CI/CD pipeline, the state file has to be stored in a secure, common path to which the pipeline has access. You need a mechanism to lock it when multiple developers in the team want to access it at the same time.

In this blog post, we will explain how to manage terraform state files in AWS, best practices for configuring them in AWS, and an example of how you can manage them efficiently in your Continuous Integration pipeline in AWS when used with AWS Developer Tools such as AWS CodeCommit and AWS CodeBuild. This blog post assumes you have a basic knowledge of terraform, AWS Developer Tools and AWS CI/CD pipelines. Let’s dive in!

Challenges with handling state files

By default, the state file is stored locally where terraform runs, which is not a problem if you are a single developer working on the deployment. Otherwise, it is not ideal to store state files locally, as you may run into the following problems:

When working in teams or collaborative environments, multiple people need access to the state file

Data in the state file is stored in plain text, which may contain secrets or sensitive information

Local files can get lost, corrupted, or deleted

Best practices for handling state files

The recommended practice for managing state files is to use terraform’s built-in support for remote backends. These are:

Remote backend on Amazon Simple Storage Service (Amazon S3): You can configure terraform to store state files in an Amazon S3 bucket, which provides a durable and scalable storage solution.
Storing on Amazon S3 also enables collaboration by allowing you to share the state file with others.

Remote backend on Amazon S3 with Amazon DynamoDB: In addition to using an Amazon S3 bucket for managing the files, you can use an Amazon DynamoDB table to lock the state file. This allows only one person to modify a particular state file at any given time, helping to avoid conflicts and enabling safe concurrent access to the state file.

There are other options available as well, such as a remote backend on Terraform Cloud and third-party backends. Ultimately, the best method for managing terraform state files on AWS will depend on your specific requirements. When deploying terraform on AWS, the preferred choice for managing state is Amazon S3 with Amazon DynamoDB.

AWS configurations for managing state files

Create an Amazon S3 bucket using terraform. Implement security measures for the Amazon S3 bucket by creating an AWS Identity and Access Management (AWS IAM) policy or Amazon S3 bucket policy. This way you can restrict access, configure object versioning for data protection and recovery, and enable server-side encryption (AES-256, or SSE-KMS for control over the encryption keys).

Next, create an Amazon DynamoDB table using terraform with the primary key set to LockID. You can also set additional configuration options such as read/write capacity units. Once the table is created, configure the terraform backend to use it for state locking by specifying the table name in the terraform block of your configuration.

For a single AWS account with multiple environments and projects, you can use a single Amazon S3 bucket. If you have multiple applications in multiple environments across multiple AWS accounts, you can create one Amazon S3 bucket for each account. In that Amazon S3 bucket, you can create appropriate folders for each environment, storing project state files with specific prefixes.
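The remote backend described above can be sketched in HCL. The bucket and table names below are placeholders, not values from the sample repository; the bucket and table must already exist before terraform init is run against the configuration that uses them (the backend block cannot reference other resources).

```hcl
# Resources that hold the state, created once (e.g., in a separate bootstrap configuration)
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-tf-state-bucket" # assumed name
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "tf-state-lock" # assumed name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# Backend configuration in the project that stores its state remotely
terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"
    key            = "dev/project-a/terraform.tfstate" # per-environment prefix
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "tf-state-lock"
  }
}
```

The key acts as the folder-plus-prefix scheme mentioned above, so one bucket can safely hold state for many environments and projects.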
Now that you know how to handle terraform state files on AWS, let’s look at an example of how you can configure them in a Continuous Integration pipeline in AWS.

Architecture

Figure 1: Example architecture on how to use terraform in an AWS CI pipeline

This diagram outlines the workflow implemented in this blog:

The AWS CodeCommit repository contains the application code

The AWS CodeBuild job contains the buildspec files and references the source code in AWS CodeCommit

The AWS Lambda function contains the application code created after running terraform apply

Amazon S3 contains the state file created after running terraform apply. Amazon DynamoDB locks the state file present in Amazon S3

Implementation

Pre-requisites

Before you begin, you must complete the following prerequisites:

Install the latest version of the AWS Command Line Interface (AWS CLI)

Install the latest version of terraform

Install the latest Git version and set up git-remote-codecommit

Use an existing AWS account or create a new one

Use an AWS IAM role with a role profile, role permissions, role trust relationship and user permissions to access your AWS account via a local terminal

Setting up the environment

You need an AWS access key ID and secret access key to configure the AWS CLI. To learn more about configuring the AWS CLI, follow these instructions.

Clone the repo for the complete example:

git clone https://github.com/aws-samples/manage-terraform-statefiles-in-aws-pipeline

After cloning, you will see the following folder structure:

Figure 2: AWS CodeCommit repository structure

Let’s break down the terraform code into two parts: one for preparing the infrastructure and another for preparing the application.

Preparing the Infrastructure

The main.tf file is the core component and does the following:

It creates an Amazon S3 bucket to store the state file. We configure the bucket ACL, bucket versioning and encryption so that the state file is secure.

It creates an Amazon DynamoDB table which will be used to lock the state file.
It creates two AWS CodeBuild projects, one for ‘terraform plan’ and another for ‘terraform apply’.

Note – It also has the code block (commented out by default) to create the AWS Lambda function which you will use at a later stage.

AWS CodeBuild projects should be able to access Amazon S3, Amazon DynamoDB, AWS CodeCommit and AWS Lambda. So, the AWS IAM role with the appropriate permissions required to access these resources is created via the iam.tf file.

Next, you will find two buildspec files named buildspec-plan.yaml and buildspec-apply.yaml that execute the terraform commands terraform plan and terraform apply, respectively.

Modify the AWS region in the provider.tf file. Update the Amazon S3 bucket name, Amazon DynamoDB table name, AWS CodeBuild compute types, and AWS Lambda role and policy names to the required values using the variable.tf file. You can also use this file to easily customize parameters for different environments.

With this, the infrastructure setup is complete. You can use your local terminal and execute the following commands in order to deploy the above-mentioned resources in your AWS account:

terraform init
terraform validate
terraform plan
terraform apply

Once the apply is successful and all the above resources have been deployed in your AWS account, proceed with deploying your application.

Preparing the Application

In the cloned repository, use the backend.tf file to create your own Amazon S3 backend to store the state file. By default, it has the values below, which you can override with your required values:

bucket = "tfbackend-bucket"
key    = "terraform.tfstate"
region = "eu-central-1"

The repository has sample python code stored in main.py that returns a simple message when invoked. In the main.tf file, you can find the block of code below to create and deploy the Lambda function that uses the main.py code (uncomment these code blocks).
data "archive_file" "lambda_archive_file" { …… } resource "aws_lambda_function" "lambda" { …… } Now you can deploy the application using AWS CodeBuild instead of running terraform commands locally which is the whole point and advantage of using AWS CodeBuild. Run the two AWS CodeBuild projects to execute terraform plan and terraform apply again. Once successful, you can verify your deployment by testing the code in AWS Lambda. To test a lambda function (console): Open AWS Lambda console and select your function “tf-codebuild” In the navigation pane, in Code section, click Test to create a test event Provide your required name, for example “test-lambda” Accept default values and click Save Click Test again to trigger your test event “test-lambda” It should return the sample message you provided in your main.py file. In the default case, it will display “Hello from AWS Lambda !” message as shown below. Figure 3: Sample Amazon Lambda function response To verify your state file, go to Amazon S3 console and select the backend bucket created (tfbackend-bucket). It will contain your state file. Figure 4: Amazon S3 bucket with terraform state file Open Amazon DynamoDB console and check your table tfstate-lock and it will have an entry with LockID. Figure 5: Amazon DynamoDB table with LockID Thus, you have securely stored and locked your terraform state file using terraform backend in a Continuous Integration pipeline. Cleanup To delete all the resources created as part of the repository, run the below command from your terminal. terraform destroy Conclusion In this blog post, we explored the fundamentals of terraform state files, discussed best practices for their secure storage within AWS environments and also mechanisms for locking these files to prevent unauthorized team access. And finally, we showed you an example of how efficiently you can manage them in a Continuous Integration pipeline in AWS. 
You can apply the same methodology to manage state files in a Continuous Delivery pipeline in AWS. For more information, see CI/CD pipeline on AWS, Terraform backends types, Purpose of terraform state. Arun Kumar Selvaraj Arun Kumar Selvaraj is a Cloud Infrastructure Architect with AWS Professional Services. He loves building world class capability that provides thought leadership, operating standards and platform to deliver accelerated migration and development paths for his customers. His interests include Migration, CCoE, IaC, Python, DevOps, Containers and Networking. Manasi Bhutada Manasi Bhutada is an ISV Solutions Architect based in the Netherlands. She helps customers design and implement well architected solutions in AWS that address their business problems. She is passionate about data analytics and networking. Beyond work she enjoys experimenting with food, playing pickleball, and diving into fun board games. View the full article
  24. Learn about passphrases and understand how you can use these strong yet memorable phrases to safeguard your accounts against hackers.View the full article
  25. In this TechRepublic exclusive, Forrester's CIO advises that the key to evaluating emerging technology is tying it to your organization's core business strategy.View the full article