Showing results for tags 'github'.

  1. Imagine arriving at a conference and immediately feeling inspired: your agenda is packed with must-see GitHub Copilot sessions, booths are filled with experts from top tech companies, and you’re surrounded by thousands of fellow developers and leaders who are eager to connect. That is the experience we’re curating for the 10th anniversary of our global developer event.

     This year, we’re going bigger and better with a stunning new venue as the foundation. We hope you’ll join us at the Fort Mason Center for Arts & Culture on the San Francisco Bay, from October 29-30, or virtually from anywhere in the world. As the world’s fair of software, GitHub Universe 2024 will be an unparalleled gathering of the brightest minds, companies, and innovators in the industry. With sessions diving into AI, the developer experience (DevEx), and security, attendees will have the opportunity to explore the latest products, best practices, and insights shaping the future of software development. Ready to be part of this milestone event with us? In-person tickets are currently 35% off with our Super Early Bird discount, available only until July 8.

     Universe 2024: Where innovation meets fun, food, and connection

     We take your experience as a Universe attendee very seriously. From the moment you step through the colorful gates right down to the beverages we serve, our 10th anniversary event will blow your expectations out of the water. Spread across a sprawling 13-acre waterfront compound, Universe will unfold across seven buildings and various outdoor areas. With five stages hosting more than 100 sessions and 150 speakers, alongside a record-breaking 3,500 attendees (that’s over 50% more in-person attendees than last year!), this will be our biggest Universe yet. During breakfast and lunch, you’ll indulge in food trucks, snacks, and beverages—all included in the price of your in-person ticket. And don’t forget to explore the GitHub Shop for the latest Universe swag and join us for lively happy hours sponsored by our partners.

     Everything you’ll learn at our global developer event

     Attending Universe is an investment in your business and your career. It’s easier than ever to take charge of your growth with our beginner, intermediate, and advanced session topics, curated around what developers and enterprises care about most. As an in-person attendee, you’ll also be able to take advantage of two ticket add-ons, GitHub Certification testing and workshops, available onsite! Take what you learn during your sessions and practice it IRL alongside your industry peers. You can secure your spot for workshops and certifications when you purchase your in-person ticket. Don’t miss out—these opportunities will go fast!

     If you’re interested in attending Universe as a speaker instead, now is your chance! The call for sessions (CFS) is now open. Learn about the perks Universe speakers get and submit a session proposal by May 10 to be considered. (And yes, you’ll receive a speaker honorarium to cover travel costs if selected!) Here’s a sneak peek of the themes we have in store.

     AI content track

     This track will delve into:
     • The impact of AI on software development life cycles.
     • Practical uses like automating pull requests and using AI code generation tools like GitHub Copilot for onboarding and productivity gains.
     • Optimizing AI outputs, crafting AI policies, and fostering responsible AI deployment while evolving skill sets for success in the AI era.

     DevEx content track

     Learn about the following within this track:
     • How the GitHub platform enhances platform engineering teams’ autonomy and efficiency.
     • The significance of investing in developer experience for fostering innovation and efficiency within organizations.
     • Strategies for effectively engaging with open source communities.

     Security content track

     Come away from this track with a better understanding of:
     • Transforming application security with AI-powered vulnerability fixes.
     • How to delegate the task of prioritizing and fixing security debt to AI.
     • Leveraging open source to enhance code security while mitigating potential vulnerabilities.

     Will you celebrate 10 years of GitHub Universe with us? Whether you’re a leader interested in connecting with and learning from other industry executives, a manager hoping to propel your team’s productivity to new heights, or a developer looking to acquire new skills and further your career, Universe has something for you. Are you in? Get your in-person tickets 35% off while supplies last, or join us virtually for free!
  2. In March, we experienced two incidents that resulted in degraded performance across GitHub services.

     March 15 19:42 UTC (lasting 42 minutes)

     On March 15, GitHub experienced service degradation from 19:42 to 20:24 UTC due to a regression in the permissions system. This regression caused failures in GitHub Codespaces, GitHub Actions, and GitHub Pages. The problem stemmed from a framework upgrade that introduced MySQL query syntax incompatible with the database proxy service used in some production clusters. GitHub responded by rolling back the deployment and fixing a misconfiguration in development and CI environments to prevent similar issues in the future.

     March 11 22:45 UTC (lasting 2 hours and 3 minutes)

     On March 11, GitHub experienced service degradation from 22:45 to 00:48 UTC due to an inadvertent deployment of network configuration to the wrong environment. This led to intermittent errors in various services, including API requests, GitHub Copilot, GitHub secret scanning, and 2FA using GitHub Mobile. The issue was detected within 4 minutes, and a rollback was initiated immediately. The majority of the impact was mitigated by 22:54 UTC. However, the rollback failed in one data center because system-created configuration records were missing a required field, causing 0.4% of requests to continue failing. The full rollback succeeded after manual intervention to correct the configuration data, enabling full service restoration by 00:48 UTC. GitHub has implemented measures for safer configuration changes, such as prevention and automatic cleanup of obsolete configuration and faster issue detection, to prevent similar issues in the future.

     Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog. The post GitHub Availability Report: March 2024 appeared first on The GitHub Blog.
  3. Learn Python through tutorials, blogs, books, project work, and exercises. Access all of it on GitHub for free and join a supportive open-source community.
  4. Just recently, I was coding a new feature for GitHub Copilot Chat. My task was to enable the chat to recognize a user’s project dependencies, allowing it to provide magical answers when the user poses a question. While I could have easily listed the project dependencies and considered the task complete, I knew that to extract top-notch responses from these large language models, I needed to be careful not to overload the prompt and confuse the model with too much context. This meant pre-processing the dependency list and selecting the most relevant entries to include in the chat prompt.

     Creating machine-processable formats for the most prominent frameworks across various programming languages would have consumed days. It was during this time that I experienced one of those “Copilot moments.” I simply queried the chat in my IDE:

     Look at the data structure I have selected and create at least 10 examples that conform to the data structure. The data should cover the most prominent frameworks for the Go programming language.

     Voilà, there it was: my initial batch of machine-processable dependencies. Just 30 minutes later, I had amassed a comprehensive collection of significant dependencies for nearly all supported languages, complete with parameterized unit tests. Completing a task in 30 minutes that would likely have taken days without GitHub Copilot was truly remarkable. This led me to ponder: what other “Copilot moments” might my colleagues here at GitHub have experienced? Thus, here are a few ways we use GitHub Copilot at GitHub.

     1. Semi-automating repetitive tasks

     Semi-automating repetitive tasks is a topic that resonates with a colleague of mine from another team. He mentions that his team is tasked with developing and maintaining several live services, many of which use protocol buffers for data communication. During maintenance, they often need to increment ID numbers in the protobuf definitions, as illustrated in the snippet below:

     google.protobuf.StringValue fetcher = 130 [(opts.cts_opt)={src:"Properties" key:"fetcher"}];
     google.protobuf.StringValue proxy_enabled = 131 [(opts.cts_opt)={src:"Properties" key:"proxy_enabled"}];
     google.protobuf.StringValue proxy_auth = 132 [(opts.cts_opt)={src:"Properties" key:"proxy_auth"}];

     He particularly appreciates having GitHub Copilot completions in the editor for these tasks. It is a significant time saver, eliminating the need to generate ID numbers manually: one can simply tab through the completion suggestions until the task is complete.

     2. Avoid getting sidetracked

     Here’s another intriguing use case I heard about from a colleague. He needed to devise a regular expression to capture a Markdown code block and extract the language identifier. Fully immersed in his work, he preferred not to interrupt his flow by switching to chat, even though it could have provided a solution. Instead, he took a creative approach and formalized his task in a code comment:

     // The string above contains a code block with a language identifier.
     // Create a regexp that matches the code block and captures the language identifier.
     // Use tagged capture groups for the language and the code.

     This prompted GitHub Copilot to generate the regular expression as the subsequent statement in his editor:

     const re = /```(?<lang>\w+)(?<code>[\s\S]+?)```/;

     With the comment deleted, the task was swiftly accomplished!

     3. Structuring data-related notes

     During a pleasant coffee chat, one of our support engineers shared an incident she experienced with a colleague last week. It was a Friday afternoon, and they were attempting to troubleshoot an issue for a specific customer. Eventually, they pinpointed the solution by creating various notes in VS Code. At GitHub, we prioritize remote collaboration, so merely resolving the task wasn’t sufficient; it was also essential to inform our colleagues about the process to ensure the best possible experience for future customer requests. Consequently, even after completing this exhaustive task, they needed to document how they arrived at the solution. She launched GitHub Copilot Chat and simply typed something along the lines of, “Organize my notes, structure them, and compile the data in the editor into Markdown tables.” Within seconds, the task was completed, allowing them to start their well-deserved weekend.

     4. Exploring and learning

     Enhancing existing skills and acquiring new ones are integral aspects of every engineer’s journey. John Berryman, a colleague of mine, took on the challenge of leveraging GitHub Copilot to tackle a non-trivial coding task in a completely unfamiliar programming language. His goal was to delve into Rust, so one Sunday he embarked on this endeavor with the assistance of GitHub Copilot Chat. The task he set out to accomplish was to develop a program capable of converting any numerical input into its written English equivalent. While it initially seemed straightforward, the task presented various complexities, such as handling teen numbers, naming conventions for tens, placement of “and” in the output, and more. Twenty-three minutes and nine seconds later, he had produced a functional version written in Rust, despite having no prior experience with the language. Notably, he documented his entire process, recording himself throughout the endeavor.

     https://github.blog/wp-content/uploads/2024/04/rust_from_scratch_720-1.mp4

     Berryman uses an older, experimental version of GitHub Copilot to write a program in Rust.

     Your very own GitHub Copilot moment

     I found it incredibly enlightening to discover how my fellow Hubbers utilize GitHub Copilot, and their innovative approaches inspired me to incorporate some of their ideas into my daily workflows. If you’re eager to explore GitHub Copilot firsthand, getting started is a breeze: simply install it into your preferred editor and ask away. The post 4 ways GitHub engineers use GitHub Copilot appeared first on The GitHub Blog.
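     The regular expression in the second example above is JavaScript. For readers who work in Python, an equivalent port might look like the sketch below; the pattern and test string are illustrative, not taken from the post (the fence characters are built at runtime to keep literal backtick runs out of the source):

     ```python
     import re

     FENCE = "`" * 3  # a literal ``` fence, assembled to avoid nesting issues

     # Named groups capture the language identifier and the code body,
     # mirroring the JavaScript regex /```(?<lang>\w+)(?<code>[\s\S]+?)```/.
     CODE_BLOCK_RE = re.compile(FENCE + r"(?P<lang>\w+)(?P<code>[\s\S]+?)" + FENCE)

     # A small Markdown document containing one fenced code block.
     markdown = "Intro text\n" + FENCE + "python\nprint('hi')\n" + FENCE + "\n"

     m = CODE_BLOCK_RE.search(markdown)
     assert m is not None
     print(m.group("lang"))  # the captured language identifier: python
     ```

     As in the JavaScript original, the lazy `+?` quantifier keeps the match from swallowing everything up to the last fence when a document contains several code blocks.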
  5. We have a few new updates to announce for the work we have been doing to improve the Azure Boards + GitHub experience. Let’s jump right into it…

     Add link to GitHub commit or pull request (GA)

     After several weeks in preview, we are excited to announce our new enhanced experience for linking work items to GitHub. You can now search and select the desired repository and drill down to find and link to the specific pull request or commit. There is no more need for multiple window changes and copy/paste (although you still have that option). This feature is only available in the New Boards Hub preview.

     GitHub connection improvements (private preview)

     For GitHub organizations that have thousands of repositories, connecting them to an Azure DevOps project has posed significant challenges. Previously, connection attempts could run into timeout issues, preventing you from integrating GitHub with Azure Boards. Today, we are announcing a preview that unblocks large GitHub organizations: you can now search and select across thousands of repositories without the risk of timeouts. We are happy to enable this feature upon request. If you are interested, please send us your Azure DevOps organization name (dev.azure.com/{organization}).

     AB# links on GitHub pull requests (private preview)

     As part of our ongoing enhancements to the Azure Boards + GitHub integration, we’re introducing a private preview feature that improves the experience with AB# links. With this update, your AB# links will appear directly in the Development section of GitHub pull requests. This means you can view the linked work items without needing to navigate through the pull request description or comments, resulting in a more intuitive experience. Please note that these links will only appear if you use AB# in the pull request description; they will not appear if you link directly to the pull request from the work item in Azure DevOps. Removing the AB# link from the description will also remove it from the Development control. If you’re interested in participating in the preview, kindly reach out to us directly via email and include your GitHub organization name (https://github.com/{organization}).

     Summary

     We are excited to continue bringing these (and more upcoming) Boards + GitHub integration features to customers. As always, we love it when folks can get early access and provide feedback. Please follow the links above to enroll and take advantage of these private previews. Click here to learn more about our Boards + GitHub integration roadmap. The post AB# links on GitHub pull requests and scale improvements for large organizations appeared first on Azure DevOps Blog.
  6. These GitHub repositories provide valuable resources for mastering computer science, including comprehensive roadmaps, free books and courses, tutorials, and hands-on coding exercises to help you gain the skills and knowledge necessary to thrive in the ever-evolving field of technology.
  7. Hello, fellow readers! Have you ever wondered how the GitHub Security Lab performs security research? In this post, you’ll learn how we leverage GitHub products and features such as code scanning, CodeQL, Codespaces, and private vulnerability reporting. By the time we conclude, you’ll have mastered the art of swiftly configuring a clean, temporary environment for the discovery, verification, and disclosure of vulnerabilities in open source software (OSS). As you explore this post, you’ll notice we cover a wide array of GitHub tooling; if you have any feedback or questions, we encourage you to engage with our community discussions. Rest assured, this post is designed to be accessible regardless of your prior familiarity with the tools mentioned. So, let’s embark on this journey together!

     Finding an interesting target

     The concept of an “interesting” target might mean something different to each of you, depending on the objective of your research. To find an “interesting” target, and for this to be fun, you first have to write down some filters, unless you really want to dive into anything! From the language the project is written in to the surface it exposes (is it an app? a framework?), every aspect matters in defining a clear objective.

     Using GitHub code search

     Many times, we need to search widely for the use of a specific method or library, either to get inspiration on how to use it or to pwn it. GitHub code search is there for us: we can use it to search across all public GitHub repositories with language, path, and regular expression filters! For instance, see this search query to find uses of readObject in Java files.

     Usually, one of the most important aspects is the number of people using the project (that is, the number affected if a vulnerability occurred), which is provided by GitHub’s dependency network (for example, pytorch/pytorch). But it does not end there: we are also interested in how often the project is updated, the number of stars, recent contributors, and so on. Fortunately for us, some very smart people over at the Open Source Security Foundation (OpenSSF) have already done the heavy lifting on this topic.

     OpenSSF Criticality Score

     The OpenSSF created the Open Source Project Criticality Score, which “defines the influence and importance of a project. It is a number between 0 (least-critical) and 1 (most-critical).” Further details on the scoring algorithm can be found in the ossf/criticality_score repository or this post. A few months after the launch, Google collected information for the top 100k GitHub repositories and shared it in this spreadsheet. Within the GitHub Security Lab, we continuously analyze OSS projects with the goal of keeping the software ecosystem safe, focusing on high-profile projects we all depend on and rely on; to find them, we base our target lists on the OpenSSF criticality score.

     The beginning of the process

     We published our Code Review of Frigate, in which we exploited a deserialization of user-controlled data using PyYAML’s default Loader. It’s a great project to use as the running example in this blog post, given the >1.6 million downloads of the Frigate container at the time of writing and the ease of its setup process.

     The original issue

     We won’t be finding new vulnerabilities in this blog post. Instead, we will use the deserialization of user-controlled data issue we reported to illustrate the process. Looking at the spreadsheet above, Frigate is listed at ~16k with a 0.45024 score, which is not yet deemed critical (>0.8), but not bad for almost two years ago!
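     The score itself comes from a simple weighted formula: each raw signal is log-scaled, saturated at a threshold, weighted, and the weighted sum is normalized to [0, 1]. A rough sketch is below; the signal names, weights, and thresholds are illustrative stand-ins, not the official parameter set (see ossf/criticality_score for that):

     ```python
     import math

     def criticality_score(signals):
         """Sketch of the criticality score formula:
         score = (1/sum(w_i)) * sum(w_i * log(1+S_i) / log(1+max(S_i, T_i)))
         signals: dict name -> (value S, weight w, threshold T); returns 0..1."""
         total_weight = sum(w for _, w, _ in signals.values())
         score = 0.0
         for value, weight, threshold in signals.values():
             # Each term is at most `weight`, since the denominator >= numerator.
             score += weight * math.log(1 + value) / math.log(1 + max(value, threshold))
         return score / total_weight

     # Hypothetical project: a few signals with assumed weights/thresholds.
     example = {
         "contributor_count": (160, 2, 5000),
         "commit_frequency": (30, 1, 1000),
         "dependents_count": (4000, 2, 500000),
     }
     print(round(criticality_score(example), 3))
     ```

     The log scaling is what keeps a single huge signal (say, millions of dependents) from dominating the score, which matches the intent described above: influence across many dimensions, not raw popularity.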
     If you are curious and want to learn a bit more about calculating criticality scores, go ahead and calculate Frigate’s current score with ossf/criticality_score.

     Forking the project

     Once we have identified our target, let’s fork the repository, either via GitHub’s UI or the CLI:

     gh repo fork blakeblackshear/frigate --default-branch-only

     Once forked, let’s go back to the state in which we performed the audit (sha=9185753322cc594b99509e9234c60647e70fae6f), either using GitHub’s API to update a reference:

     gh api -X PATCH /repos/username/frigate/git/refs/heads/dev -F sha=9185753322cc594b99509e9234c60647e70fae6f -F force=true

     or using git:

     git clone https://github.com/username/frigate
     cd frigate
     git checkout 9185753322cc594b99509e9234c60647e70fae6f
     git push origin HEAD:dev --force

     Now we are ready to continue!

     Code scanning and CodeQL

     Code scanning is GitHub’s solution to find, triage, and prioritize fixes for existing problems in your code; its alerts appear in the Security tab. When code scanning is connected to a static analysis tool like GitHub’s CodeQL, that’s when the magic happens, but we will get there in a moment. CodeQL is the static code analysis engine developed by GitHub to automate security checks. CodeQL performs semantic and dataflow analysis, “letting you query code as though it were data.” CodeQL’s learning curve can be a little steep at the start, but it is absolutely worth the effort, as its dataflow libraries offer a solution for almost any situation.

     Learning CodeQL

     If you are interested in learning more about the world of static analysis, with exercises and more, follow @sylwia-budzynska’s CodeQL zero to hero series. You may also want to join GitHub Security Lab’s Slack instance to hang out with CodeQL engineers and the community.

     Creating the CodeQL workflow file

     GitHub engineers are doing a fantastic job of making CodeQL analysis available in a one-click fashion. However, to learn what’s going on behind the scenes (because we are researchers), we are going to do the manual setup.

     Running CodeQL at scale

     In this case, we are using CodeQL on a per-repository basis. If you are interested in running CodeQL at scale to hunt zero-day vulnerabilities and their variants across repositories, learn more about Multi-repository Variant Analysis. In fact, the Security Lab has done some work to run CodeQL on more than 1k repositories at once!

     To create the workflow file, follow these steps:

     1. Visit your fork. For security and simplicity, we are going to remove the existing GitHub Actions workflows so we do not run unwanted workflows. To do so, we use github.dev (GitHub’s web-based editor): for code changes like this, which don’t require reviews, rebuilds, or testing, simply browse to /.github/workflows, press the . (dot) key once, and a VS Code editor will pop up in your browser. Then push the changes.
     2. Enable GitHub Actions (optional). Head to the GitHub Actions tab and click “I understand my workflows, go ahead and enable them.” Note that this might not appear if you deleted all workflows previously.
     3. Head to the Security tab and click “Code Scanning.”
     4. Click “Configure scanning tool.”
     5. In CodeQL analysis, click “Set up” and then click “Advanced.”

     Now you are guided to GitHub’s UI file editor with a custom workflow file (whose source is located at actions/starter-workflows) for the CodeQL action. You can see it is fully customized for this repository by looking at the on.push.branches and strategy.matrix.language values. If you are not familiar with GitHub Actions, refer to the documentation to understand the basics of a workflow.

     At first glance, we can see that there’s an analyze job that will run for each language defined in the workflow. The analyze job will:

     1. Clone the repository.
     2. Initialize CodeQL. In this step, github/codeql-action/init will download the latest release of CodeQL, or any CodeQL packs, that are not available locally.
     3. Autobuild. The autobuild step will try to automatically build the code present in the workspace (step 1) in order to populate a database for later analysis. If it’s not a compiled language, it will just succeed and continue.
     4. Analyze. The CodeQL binary will be called to finalize the CodeQL database and run queries on it, which may take a few minutes.

     Advanced configuration using Security Lab’s Community QL Packs

     With CodeQL’s default configuration (the default workflow), you will already find impactful issues. Our CodeQL team makes sure these default queries are designed to have a very low false positive rate so that developers can confidently add them to their CI/CD pipelines. However, if you are a security team like the GitHub Security Lab, you may prefer a different set of audit models and queries with a low false negative rate, or community-powered models customized for your specific target or methodology. With that in mind, we recently published our CodeQL Community Packs, and using them is as easy as a one-liner in your workflow file. As the README outlines, we just need to add a packs variable to the Initialize CodeQL step:

     - name: Initialize CodeQL
       uses: github/codeql-action/init@v2
       with:
         languages: ${{ matrix.language }}
         packs: githubsecuritylab/codeql-${{ matrix.language }}-queries

     Once done, we are ready to save the file and browse the results! For more information on customizing the scan configuration, refer to the documentation; the bit I find most interesting is Using a custom configuration file.

     Browsing alerts

     A few minutes in, the results are shown in the Security tab; let’s dig in!

     Anatomy of a code scanning alert

     While you may think that running CodeQL locally would be easier, code scanning provides additional built-in mechanisms to avoid duplicate alerts and to prioritize or dismiss them. Also, the amount of information given by a single alert page can save you a lot of time: in a few seconds, this view answers what, where, when, and how. Even though we can see a few lines surrounding the sink, we need to see the whole flow to determine whether we want to pursue exploitation further. For that, click “Show paths.” In this view, we can see that the flow of the vulnerability begins at a user-controllable node (in CodeQL-fu, a RemoteFlowSource), which flows without sanitizers into a known PyYAML sink.

     Digging into the alert

     The alert page and the flow paths alone aren’t enough to tell whether this is exploitable. While new_config is clearly something we can control, we don’t know the specifics of the Loader that yaml.load is using. A custom Loader can inherit from quite a few kinds of Loaders, so we need to make sure that the inherited Loader allows custom constructors.

     def load_config_with_no_duplicates(raw_config) -> dict:
         """Get config ensuring duplicate keys are not allowed."""

         class PreserveDuplicatesLoader(yaml.loader.Loader):
             pass

         ...

         return yaml.load(raw_config, PreserveDuplicatesLoader)

     However, we know CodeQL uses dataflow for its queries, so it should already have checked the Loader type, right?

     The community helps CodeQL get better

     When we were writing the post about Frigate’s audit, we came across a new alert for the vulnerability we had just helped fix! Our suggested fix was to change the Loader from yaml.loader.Loader to yaml.loader.SafeLoader, but it turns out that although CodeQL accounted for a few known safe loaders, it did not account for classes inheriting from them. Because of this, code scanning didn’t close the alert we reported.

     The world of security is huge and evolving every day, so supporting every source, sanitizer, and sink that exists for every query is impossible. Security requires collaboration between developers and security experts, and we encourage everyone who uses CodeQL to give back to the community in any of the following ways:

     • Report false positives in github/codeql: CodeQL engineers and members of the community actively monitor these. When we came across the false positive explained above, we opened github/codeql#14685.
     • Suggest new models for the Security Lab’s CodeQL Community Packs: whether you contribute a pull request introducing new models or queries, or open an issue to share your ideas for them, you are already having a huge impact on the research community. The repository is also monitored by CodeQL engineers, so your suggestion might make it into the main repository, benefiting a huge number of users and enterprises. Your engagement is more impactful than you might think.

     CodeQL model editor

     If you are interested in supporting new dependencies with CodeQL, see the CodeQL model editor. The model editor is designed to help you model external dependencies of your codebase that are not supported by the standard CodeQL libraries. Now that we are sure about the exploitability of the issue, we can move on to the exploitation phase.

     GitHub Codespaces

     Codespaces is GitHub’s solution for cloud-based, instant, and customizable development environments built on Visual Studio Code. In this post, we will use Codespaces as our exploitation environment due to its safe (isolated) and ephemeral nature: we are one click away from creating and deleting a codespace. Although this feature has its own billing, we will stay within the free 120 core hours per month.

     Creating a codespace

     I wasn’t kidding when I said “we are one click away from creating and deleting a codespace”—simply go to “Code” and click “Create codespace on dev.” Fortunately for us, the Frigate maintainers have helpfully developed a custom devcontainer configuration for seamless integration with VS Code (and thus Codespaces). For more information about .devcontainer customization, refer to the documentation. Once loaded, I suggest you close the current browser tab and instead connect to the codespace using VS Code along with the Remote Explorer extension. With that set up, we have a fully integrated environment with built-in port forwarding.

     Set up for debugging and exploitation

     When performing security research, having a full setup ready for debugging can be a game changer. In most cases, exploiting the vulnerability requires analyzing how the application processes and reacts to your interactions, which can be impossible without debugging.

     Debugging

     Right after creating the codespace, we can see that the build failed. Given that there is an extensive devcontainer configuration, we can guess that it was not made for Codespaces, but for a local VS Code installation not meant to be used in the cloud. Clicking “View Creation Log” helps us find out that Docker is trying to find a non-existent device:

     ERROR: for frigate-devcontainer - Cannot start service devcontainer: error gathering device information while adding custom device "/dev/bus/usb": no such file or directory

     We need to head to the docker-compose.yml file (/workspaces/frigate/docker-compose.yml) and comment out the following:

     • The devices property
     • The deploy property
     • The /dev/bus/usb volume

     Afterwards, we go to /workspaces/frigate/.devcontainer/post_create.sh and remove lines 5-9. After the change, we can successfully rebuild the container. Once rebuilt, we can see six ports in the port forwarding section. However, the Frigate API, the one we are targeting through nginx, is not active. To solve that, we start debugging by heading to the “Run and Debug” panel on the left and clicking the green (play-like) button to start debugging Frigate.

     Exploitation

     The built-in port forwarding feature allows us to use network-related software like Burp Suite or Caido right from our native host, so we can send the following request:

     POST /api/config/save HTTP/1.1
     Host: 127.0.0.1:53128
     Content-Length: 50

     !!python/object/apply:os.popen
     - touch /tmp/pwned

     Using the debugging setup, we can analyze how new_config flows into yaml.load and creates the /tmp/pwned file. Now that we have a valid exploit proving the vulnerability, we are ready to report it to the project.

     Private vulnerability reporting

     Reporting vulnerabilities in open source projects has never been easy, for many reasons: finding a private way to communicate with maintainers, getting their reply, and agreeing on the many facets a vulnerability covers is quite challenging over a text-based channel. That is what private vulnerability reporting (PVR) solves: a single, private, interactive place in which security researchers and maintainers work together to make their software more secure, and their dependent consumers more aware.
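     The mechanics of the bug exploited above, yaml.load with a loader that still resolves python/* tags, can be reproduced in a few self-contained lines. This sketch uses a harmless callable (os.path.join) instead of the os.popen payload from the report, and hypothetical class names; it mirrors the Frigate pattern of subclassing the default Loader:

     ```python
     import os
     import yaml

     # Subclassing yaml.Loader, as Frigate's PreserveDuplicatesLoader does,
     # does NOT make it safe: python/* tags are still resolved.
     class DuplicatesLoader(yaml.Loader):
         pass

     # Attacker-controlled "config": python/object/apply tells unsafe loaders
     # to call an arbitrary callable during parsing.
     payload = "!!python/object/apply:os.path.join ['pwned', 'dir']"

     result = yaml.load(payload, DuplicatesLoader)
     print(result == os.path.join("pwned", "dir"))  # True: the call executed

     # The suggested fix: inherit from SafeLoader, which rejects python/* tags.
     class SafeDuplicatesLoader(yaml.SafeLoader):
         pass

     safe_rejected = False
     try:
         yaml.load(payload, SafeDuplicatesLoader)
     except yaml.constructor.ConstructorError:
         safe_rejected = True
     print(safe_rejected)  # True
     ```

     This is also why the original CodeQL false positive mattered: safety is a property of the constructor set the loader inherits, not of the class name, so a subclass of SafeLoader is safe while a subclass of Loader is not.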
Closing the loop

Published advisories resulting from private vulnerability reports can be included in the GitHub Advisory Database, which automatically discloses your report to end users through Dependabot! Note that GitHub has chosen to introduce this feature in an opt-in manner, aligning with our developer-first philosophy: project maintainers retain the autonomy to decide whether they wish to participate in this reporting experience. That said, tell your favorite maintainers to enable PVR! You can find inspiration in the issues we open when we can't find a secure and private way of reporting a vulnerability.

Sending the report

Once we have validated the vulnerability and built a proof of concept (PoC), we can use private vulnerability reporting to communicate privately with the Frigate maintainers. The report form supports structured fields such as affected products, a custom CVSS severity, a linked CWE, and credits with defined roles, ensuring precise documentation and proper recognition, both crucial for a collaborative and effective security community. Once the report is submitted, both ends (reporter and maintainer) can collaborate in a chat and code together in a temporary private fork. On the maintainer side, they are one click away from requesting a CVE, which generally takes just two days to be created. For more information on PVR, refer to the documentation.

Example of a published report

GitHub and security research

In today's tech-driven environment, GitHub serves as a valuable resource for security researchers. With tools such as code scanning, Codespaces, and private vulnerability reporting seamlessly integrated into the platform, researchers can effectively identify and address vulnerabilities end to end. This comprehensive approach not only makes research easier but also strengthens the global cybersecurity community.
By offering a secure, collaborative, and efficient platform to spot and tackle potential threats, GitHub empowers both seasoned security professionals and aspiring researchers. It’s the go-to destination for boosting security and keeping up with the constantly changing threat landscape. Happy coding and research! GitHub Security Lab’s mission is to inspire and enable the community to secure the open source software we all depend on. Learn more about their work.
  8. Begin your MLOps journey with these comprehensive free resources available on GitHub.View the full article
9. AI has become an integral part of my workflow these days, and with the assistance of GitHub Copilot, I move a lot faster when I'm building a project. Having used AI tools to increase my productivity over the past year, I've realized that, just as with learning a new framework or library, we can enhance our efficiency with AI tools by learning how to best use them. In this blog post, I'll share some of the things I do daily to get the most out of GitHub Copilot. I hope these tips help you become a more efficient and productive user of the AI assistant.

Beyond code completion

To make full use of the power of GitHub Copilot, it's important to understand its capabilities. GitHub Copilot is developing rapidly, and new features are being added all the time. It's no longer just a code completion tool in your editor: it now includes a chat interface you can use in your IDE, a command line tool via a GitHub CLI extension, a summary tool in your pull requests, a helper tool in your terminal, and much, much more. In a recent blog post, I listed some of the ways you didn't know you could use GitHub Copilot. It will give you a great overview of how much the AI assistant can currently do. But beyond interacting with GitHub Copilot, how do you help it give you better answers? Well, the answer to that needs a bit more context.

Context, context, context

If you understand large language models (LLMs), you will know that they are designed to make predictions based on the context provided. This means that the more contextually rich our input or prompt is, the better the prediction or output will be. As such, learning to provide as much context as possible is key when interacting with GitHub Copilot, especially with the code completion feature. Unlike ChatGPT, where you need to provide all the data to the model in the prompt window, installing GitHub Copilot in your editor lets the assistant infer context from the code you're working on.
It then uses that context to provide code suggestions. We already know this, but what else can we do to give it additional context? I want to share a few essential tips for providing GitHub Copilot with more context in your editor so you get the most relevant and useful code out of it:

1. Open your relevant files

Having your files open provides GitHub Copilot with context. When you have additional files open, they help to inform the suggestion that is returned. Remember, if a file is closed, GitHub Copilot cannot see its content in your editor, which means it cannot get context from those closed files. GitHub Copilot looks at the currently open files in your editor to analyze the context, creates a prompt that gets sent to the server, and returns an appropriate suggestion. Have a few files open in your editor to give GitHub Copilot a bigger picture of your project. You can also use #editor in the chat interface to provide GitHub Copilot with additional context on your currently opened files in Visual Studio Code (VS Code) and Visual Studio. https://github.blog/wp-content/uploads/2024/03/01_editor_command_open_files.mp4 Remember to close unneeded files when context switching or moving on to the next task.

2. Provide a top-level comment

Just as you would give a brief, high-level introduction to a coworker, a top-level comment in the file you're working in can help GitHub Copilot understand the overall context of the pieces you will be creating, especially if you want your AI assistant to generate the boilerplate code to get you going. Be sure to include details about what you need and provide a good description so it has as much information as possible. This will help guide GitHub Copilot toward better suggestions and give it a goal to work on. Having examples, especially when processing data or manipulating strings, helps quite a bit.

3.
Set includes and references

It's best to manually set the includes/imports or module references you need for your work, particularly if you're working with a specific version of a package. GitHub Copilot will make suggestions, but you know which dependencies you want to use. This also lets GitHub Copilot know which frameworks, libraries, and versions you'd like it to use when crafting suggestions. It can be helpful for jump-starting GitHub Copilot to a newer library version when it defaults to providing older code suggestions. https://github.blog/wp-content/uploads/2024/03/03_includes_references.mp4

4. Meaningful names matter

The names of your variables and functions matter. If you have a function named foo or bar, GitHub Copilot will not be able to give you the best completion because it cannot infer intent from the names. Just as the function name fetchData() won't mean much to a coworker (or to you after a few months), it won't mean much to GitHub Copilot either. Implementing good coding practices will help you get the most value from GitHub Copilot. While GitHub Copilot helps you code and iterate faster, remember that the old rule of programming still applies: garbage in, garbage out.

5. Provide specific and well-scoped function comments

Commenting your code helps you get very specific, targeted suggestions. A function name can only be so descriptive without becoming overly long, so function comments can fill in details that GitHub Copilot might need to know. One of the neat features of GitHub Copilot is that it can determine the correct comment syntax typically used in your programming language for function/method comments and will help create them for you based on what the code does. Adding more detail to these comments as the first change you make then helps GitHub Copilot determine what you would like to do in code and how to interact with that function.
Remember: single, specific, short comments help GitHub Copilot provide better context. https://github.blog/wp-content/uploads/2024/03/05_simple_specific_short.mp4

6. Provide sample code

Providing sample code to GitHub Copilot will help it determine what you're looking for. This helps ground the model and provide it with even more context. It also helps GitHub Copilot generate suggestions that match the language and tasks you want to achieve, and return suggestions based on your current coding standards and practices. Unit tests provide one level of sample code at the individual function/method level, but you can also provide code examples in your project showing how to do things end to end. The cool thing about using GitHub Copilot long term is that it nudges us toward a lot of the good coding practices we should've been doing all along. Learn more about providing context to GitHub Copilot by watching this YouTube video: Inline Chat with GitHub Copilot.

Inline chat

Beyond providing enough context, there are some built-in features of GitHub Copilot that you may not be taking advantage of. Inline chat, for example, gives you the opportunity to chat with GitHub Copilot right between your lines of code. By pressing CMD + I (CTRL + I on Windows), you'll have Copilot right there to ask questions. This is a bit more convenient for quick fixes than opening up GitHub Copilot Chat's side panel. https://github.blog/wp-content/uploads/2024/03/07_a_inline_chat_animated.mp4 This experience provides you with code diffs inline, which is awesome. There are also special slash commands available, like creating documentation with just the slash of a button!

Tips and tricks with GitHub Copilot Chat

GitHub Copilot Chat provides an experience in your editor where you can have a conversation with the AI assistant. You can improve this experience by using built-in features to make the most out of it.

8.
Remove irrelevant requests

For example, did you know that you can delete a previously asked question in the chat interface to remove it from the indexed conversation, especially if it is no longer relevant? Doing this improves the flow of the conversation and gives GitHub Copilot only the information it needs to provide you with the best output.

9. Navigate through your conversation

Another tip I found is to use the up and down arrows to navigate through your conversation with GitHub Copilot Chat. I found myself scrolling through the chat interface to find the last question I asked, then discovered I can just use my keyboard arrows, just like in the terminal! https://github.blog/wp-content/uploads/2024/03/09_up_down_arrows_animated.mp4

10. Use the @workspace agent

If you're using VS Code or Visual Studio, remember that agents are available to help you go even further. The @workspace agent, for example, is aware of your entire workspace and can answer questions related to it. As such, it can provide even more context when you're trying to get a good output from GitHub Copilot. https://github.blog/wp-content/uploads/2024/03/10_workspace_agent.mp4

11. Highlight relevant code

Another great tip when using GitHub Copilot Chat is to highlight relevant code in your files before asking it questions. This helps produce targeted suggestions and gives the assistant more context on what you need help with.

12. Organize your conversations with threads

You can have multiple ongoing conversations with GitHub Copilot Chat on different topics by isolating your conversations with threads. We've provided a convenient way for you to start new conversations (threads) by clicking the + sign on the chat interface.

13. Slash commands for common tasks

Slash commands are awesome, and there are quite a few of them. We have commands to help you explain code, fix code, create a new notebook, write tests, and many more.
They are just shortcuts to common prompts that we've found particularly helpful in day-to-day development from our own internal usage.

- /explain: Get code explanations. Open a file or highlight the code you want explained and type: /explain what is the fetchPrediction method?
- /fix: Receive a proposed fix for the problems in the selected code. Highlight problematic code and type: /fix propose a fix for the problems in fetchAirports route
- /tests: Generate unit tests for selected code. Open a file or highlight the code you want tests for and type: /tests
- /help: Get help on using Copilot Chat. Type: /help what can you do?
- /clear: Clear the current conversation. Type: /clear
- /doc: Add a documentation comment. Highlight code and type: /doc. You can also press CMD+I in your editor and type /doc inline
- /generate: Generate code to answer your question. Type: /generate code that validates a phone number
- /optimize: Analyze and improve the running time of the selected code. Highlight code and type: /optimize fetchPrediction method
- /new: Scaffold code for a new workspace. Type: /new create a new django app
- /simplify: Simplify the selected code. Highlight code and type: /simplify
- /feedback: Provide feedback to the team. Type: /feedback

See the following image for commands available in VS Code:

14. Attach relevant files for reference

In Visual Studio and VS Code, you can attach relevant files for GitHub Copilot Chat to reference by using #file. This scopes GitHub Copilot to a particular context in your code base and gives you a much better outcome. To reference a file, type # in the comment box, choose #file, and you will see a popup where you can choose your file. You can also type #file_name.py in the comment box. See below for an example: https://github.blog/wp-content/uploads/2024/03/14_attach_filename.mp4

15.
Start with GitHub Copilot Chat for faster debugging

These days, whenever I need to debug some code, I turn to GitHub Copilot Chat first. Most recently, I was implementing a decision tree and performed a k-fold cross-validation. I kept getting incorrect accuracy scores and couldn't figure out why. I turned to GitHub Copilot Chat for some assistance, and it turns out I wasn't using my training data set (X_train, y_train), even though I thought I was:

I'm catching up on my AI/ML studies today. I had to implement a DecisionTree and use the cross_val_score method to evaluate the model's accuracy score. I couldn't figure out why the incorrect values for the accuracy scores were being returned, so I turned to Chat for some help pic.twitter.com/xn2ctMjAnr — Kedasha is learning about AI + ML (@itsthatladydev) March 23, 2024

I figured this out a lot faster than I would've with external resources. I want to encourage you to start with GitHub Copilot Chat in your editor to get debugging help faster instead of going to external resources first. Follow my example above by explaining the problem, pasting the problematic code, and asking for help. You can also highlight the problematic code in your editor and use the /fix command in the chat interface.

Be on the lookout for sparkles!

In VS Code, you can quickly get help from GitHub Copilot by looking out for "magic sparkles." For example, in the commit comment section, clicking the magic sparkles will help you generate a commit message with the help of AI. You can also find magic sparkles inline in your editor as you're working, for a quick way to access GitHub Copilot inline chat. https://github.blog/wp-content/uploads/2024/03/15_magic_sparkles.mp4 Pressing them will use AI to help you fill out the data, and more magic sparkles are being added wherever we find other places for GitHub Copilot to help in your day-to-day coding experience.
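The cross-validation mix-up above is easier to catch if you keep in mind what k-fold actually does: every sample serves as test data exactly once, and the model must only ever be fit on that round's training indices. Here is a dependency-free sketch of the index splitting (a hypothetical helper for illustration, not sklearn's implementation):

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Each sample lands in the test fold exactly once; the remaining
    samples form the training fold for that round.
    """
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        test_set = set(test_idx)
        train_idx = [i for i in range(n_samples) if i not in test_set]
        yield train_idx, test_idx
        start += size
```

If you hand the evaluation the wrong split, as in the bug above, the reported accuracy stops meaning what you think it means, which is exactly the kind of mistake a quick chat with the assistant can surface.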
Know where your AI assistant shines

To get the most out of the tool, remember that context and prompt crafting are essential. It is also important to understand where the tool shines. Some of the things GitHub Copilot is very good at include boilerplate code and scaffolding, writing unit tests, writing documentation, pattern matching, explaining uncommon or confusing syntax, cron jobs, and regular expressions, as well as helping you remember things you've forgotten, and debugging. But never forget that you are in control, and GitHub Copilot is here as just that: your copilot. It is a tool that can help you write code faster, and it's up to you to decide how best to use it. It is not here to do your work for you or to write everything for you. It will guide you and nudge you in the right direction, just as a coworker would if you asked them questions or for guidance on a particular issue. I hope these tips and best practices were helpful. By properly leveraging GitHub Copilot, you can significantly improve your coding efficiency and output. Learn more about how GitHub Copilot works by reading Inside GitHub: Working with the LLMs behind GitHub Copilot and Customizing and fine-tuning LLMs: What you need to know. Harness the power of GitHub Copilot. Learn more or get started now.
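The "meaningful names" and "function comments" tips above condense into one hypothetical example; the shop domain and function below are invented for illustration, not taken from the post:

```python
# Top-level comment: this module computes order totals for a small web shop.
# Prices are in integer cents to avoid floating-point rounding issues.


def total_price_cents(item_prices_cents, discount_percent=0):
    """Return the order total in cents after applying a percentage discount.

    A name like `foo` or a bare `calc()` gives an AI assistant (or a
    coworker) nothing to go on; a descriptive name plus a short docstring
    states the intent, the units, and the edge cases up front.
    """
    if not 0 <= discount_percent <= 100:
        raise ValueError("discount_percent must be between 0 and 100")
    subtotal = sum(item_prices_cents)
    # Integer arithmetic keeps the result exact in cents.
    return subtotal * (100 - discount_percent) // 100
```

With the top-level comment, the descriptive name, and the docstring in place, a completion tool has the units, the valid input range, and the goal of the function as context, which is precisely what the tips above ask you to supply.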
10. GitHub, the number one hub for developers worldwide, has become an integral platform in modern software development. What started out as a tool for easier collaboration among a few coders has grown into an ecosystem powering millions of projects, spanning from open source to enterprise. However, despite its widespread adoption, understanding the inner workings of GitHub can be a challenge for newcomers; one question often arises: how does the platform actually work under the hood? In this article about how GitHub works, we'll navigate through GitHub's core components, shedding light on its primary functions. From version control to issue tracking and collaboration tools, we'll explore how GitHub empowers software teams to streamline their workflows and enhance productivity.

Key Takeaways

- GitHub plays a crucial role in modern software development, serving as a central hub for collaboration and code management across various project scales.
- Core tools like repositories, commits, branches, and pull requests are fundamental for effective version control and team collaboration on GitHub.
- GitHub's workflow fosters streamlined processes, transparency, and continuous improvement in software development.

What is GitHub?

GitHub is a web-based platform that provides developers with a centralized hub for storing, sharing, and collaborating on code repositories. Founded in 2008, GitHub offers powerful version control features powered by Git, allowing developers to track changes to their codebase over time. It facilitates collaboration through features such as pull requests, code reviews, and issue tracking, enabling teams to work together efficiently. Now that we've defined GitHub, let's examine the fundamental tools and functionalities that enable it to enhance developer workflows. To learn more about Git, check out the blog How Git Works.
GitHub's Main Features and Tools

In this section, we delve into the backbone of GitHub: its essential management features and collaborative tools.

Core Version Control Features

Below are the powerful version control tools that form the cornerstone of GitHub's functionality:

- Repositories: These serve as centralized storage for your codebase, documentation, and more. They are not only backed up and easily shareable but also come with comprehensive access controls to manage who can view or edit the project.
- Commits: Think of commits as milestones in your project's timeline. Each commit is a snapshot of your project at a specific point, capturing changes and allowing you to trace the evolution of your code over time.
- Branches: Branches exist in parallel with the main project, providing a safe space for developers to work on new features or fix bugs without directly modifying the main code. Changes made in branches do not affect the main project code until they are merged.
- Pull Requests: The bridge for collaboration, pull requests enable team members to propose, discuss, and review changes from branches before they are merged into the main codebase. This fosters a culture of peer review and collective improvement.

To learn more about Git Pull, check out this article: How to Force Git Pull to Overwrite Local Files?

GitHub's Tools for Team Collaboration

Beyond code management functionality, GitHub also has the following integrated tools for improved team workflows:

- Issues: GitHub issues act as a versatile platform for tracking bugs, tasks, and enhancements. They link directly to the code, making it easier to tie discussions to specific project elements.
- Wikis: Every GitHub repository can have its own wiki, allowing teams to create and maintain comprehensive documentation in a centralized location.
This ensures that information remains accessible and up to date.
- Graphs and Pulse: These visualization tools offer insights into a project's activity, such as contribution patterns and progress over time. They provide a high-level overview, aiding in project management and team coordination.
- GitHub Pages: This feature lets users publish websites directly from their repositories. Ideal for project documentation, personal portfolios, or project showcases, GitHub Pages simplifies the process of taking your project to the web.

The GitHub Workflow

The GitHub workflow revolves around modifying a codebase housed in a repository, recording those modifications, and then proposing and integrating them into the main project. The typical process follows this general workflow:

Set Up a Repository

To begin, developers create a new repository from scratch or fork an existing repository they have access to. Forking creates a copy of the project under the developer's GitHub account, while cloning downloads the repository to the developer's local machine for editing.

Make Changes and Commit

With the repository files available locally, developers can make the necessary edits to the code, add or remove files as needed, and more. As changes are made, developers stage them for inclusion in the next commit. Each commit functions as a snapshot recording changes, accompanied by a commit message explaining the what and why of the alterations. Commits capture an evolutionary trail of incremental additions.

Push Changes to GitHub

Thus far, changes exist only on the local clone. To synchronize with the remote repository on GitHub, developers use the git push command to upload recent commits. This ensures an up-to-date development version is centrally available for team members to access.

Open Pull Requests

Here, developers propose that commits be merged back into the canonical project via pull requests.
By opening pull requests, they ask administrators or teammates to review the changes, provide feedback, approve, and finally integrate the commits. This vital checkpoint ensures the quality of the code.

Repeat for New Changes

The cycle repeats as developers create additional features, bug fixes, and more. Continued iteration, with decentralized contributions merging in simultaneously, eventually shapes the software architecture.

Collaborating on GitHub

GitHub provides a suite of features for organizing collaboration around projects in productive and transparent ways:

Issues and Project Boards

Issues provide threaded discussions centered on ideas, enhancements, bugs, or broader task management. Project boards visually track issues as cards sorted into progress columns. This gives a high-level view of the work remaining. Issues facilitate coordination from problem-solving to planning.

Pull Requests and Code Reviews

As outlined in the workflow, pull requests allow proposed commits to be reviewed before integration. Teammates can provide feedback on code quality, suggest improvements, approve changes, and monitor progress through this crucial process before merging to ensure consistency.

Organizations and Teams

For larger-scale collaboration, organizations contain multiple repositories under one entity. Owners can manage permissions for members and divide them into teams with custom access levels to individual repositories or coding resources.

Wikis and GitHub Pages

To organize institutional knowledge, wikis document processes, guidelines, meeting notes, and more in one central location tied to the appropriate repositories. GitHub Pages enables effortless publishing of website resources related to projects.

Notifications and Social Features

Notifications alert contributors to relevant activity, such as issue assignments, PR updates, and comments requiring a response. A news feed provides updates across the repositories someone follows. Social aspects streamline awareness.
In these ways, GitHub provides robust tools to connect distributed teams working in tandem.

Additional GitHub Capabilities

Beyond its core features, GitHub continues to expand its ecosystem with the following specialized capabilities:

- GitHub Actions: GitHub Actions provides infrastructure for automating custom software workflows directly integrated with the repository. For example, developers can set up trigger events to run preset tasks like testing, building, and deploying code without manual intervention. Actions streamline DevOps pipelines.
- GitHub Packages: GitHub Packages lets users store and distribute other software assets, like Ruby gems or Docker containers. Teams can share these packages privately or publicly, just like code repositories.
- GitHub Sponsors: The GitHub Sponsors program enables funding support for open source project developers. Organizations or individual users can sponsor contributors financially to empower sustainable open source maintenance by the community.
- Code Scanning: GitHub has integrated code scanning, which automatically scans code for security vulnerabilities and coding errors. This flags issues like credential leaks early on to prevent repository compromises. Code scanning integrates with GitHub Actions, so scans can be triggered automatically.

Conclusion

In essence, GitHub has democratized software development, empowering developers to collaborate seamlessly, share knowledge, and collectively build better software. Its impact on the industry cannot be overstated, and it will likely continue to shape the future of software development practices for years to come. Enroll in our Git for Beginners course to learn and practice more Git concepts. View the full article
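The GitHub Actions capability described above is driven by YAML workflow files committed to the repository. As a minimal illustrative sketch (the file path is the conventional one; the project details, such as a Python test suite run with pytest, are hypothetical):

```yaml
# Hypothetical workflow file: .github/workflows/ci.yml
# Runs the test suite on every push and pull request.
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # fetch the repository
      - uses: actions/setup-python@v5      # install a Python toolchain
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest                        # fail the job if tests fail
```

Committing a file like this is all it takes to enable the workflow; GitHub evaluates the `on:` triggers and runs the job on its hosted runners.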
11. This is abridged content from October 2023's Insider newsletter. Like what you see? Sign up for the newsletter to receive complete, unabridged content in your inbox twice a month. Sign up now >

Are you ready to unlock the secrets of organization, collaboration, and project magic? Buckle up, because we've got a handful of GitHub Projects tips and tricks that will turn you into a project management wizard! Keep reading for a list of things you can do with GitHub Projects:

1. Manage your projects from the command line

Some folks prefer to work in the terminal, and with the GitHub CLI project command, you can manage and automate workflows from the command line. For example, you can create a new project with a command like gh project create. Then, you can add issues to this board using the gh issue create command, making it easy to manage and track your project's progress from the command line.

2. Create reusable project templates

If you often find yourself recreating projects with similar content and structure, you can set a project as a template when creating new projects. To do so, navigate to the project "Settings" page and, under the "Templates" section, toggle on Make template. This turns the project into a template that can be used via the green Use this template button at the top of your project or when creating a new project.

3. Add issues from any organization

If you're an open source maintainer or a developer with multiple clients, you may be working across various organizations at a time. This also means you have multiple issues to keep track of, and GitHub Projects can help you collate issues from any organization onto a single project. You can do this in one of two ways:

- Copy the issue link from the organization and paste it into the project.
- Search for the organization and repository from the project using # and select the issues you want to add.

4.
Edit multiple items at once

Rather than spending time manually updating individual items, you can edit multiple items at once with the bulk editing feature. Let's say you wanted to assign multiple issues to yourself. On the table layout, assign one issue, highlight and copy the contents of the cell, then select the remaining items you want assigned and paste the copied contents. And there you have it: you just assigned yourself to multiple issues at once. Check out this GIF for a visual representation:

Want even more tips and tricks? Check out this blog post for 10 more GitHub Projects tips, or learn how we use GitHub Projects to standardize our workflows and stay aligned. You're now equipped to work your magic with GitHub Projects! Want to receive content like this twice a month, right in your inbox? Sign up for the newsletter now >
12. Starting today, code scanning autofix will be available in public beta for all GitHub Advanced Security customers. Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python, and delivers code suggestions shown to remediate more than two-thirds of found vulnerabilities with little or no editing.

Found means fixed

Our vision for application security is an environment where found means fixed. By prioritizing the developer experience in GitHub Advanced Security, we already help teams remediate 7x faster than traditional security tools. Code scanning autofix is the next leap forward, helping developers dramatically reduce the time and effort spent on remediation. Even though applications remain a leading attack vector, most organizations admit to an ever-growing number of unremediated vulnerabilities in production repositories. Code scanning autofix helps organizations slow the growth of this "application security debt" by making it easier for developers to fix vulnerabilities as they code. Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation. Security teams will also benefit from a reduced volume of everyday vulnerabilities, so they can focus on strategies to protect the business while keeping up with an accelerated pace of development.

Want to try code scanning autofix?

If your organization is new to GitHub or does not yet have GitHub Advanced Security (or its prerequisite, GitHub Enterprise), contact us to request a demo and set up a free trial.

How it works

When a vulnerability is discovered in a supported language, fix suggestions include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss.
In addition to changes to the current file, these code suggestions can include changes to multiple files and the dependencies that should be added to the project. Want to learn more about how we do it? Read Fixing security vulnerabilities with AI: A peek under the hood of code scanning autofix. Behind the scenes, code scanning autofix leverages the CodeQL engine and a combination of heuristics and GitHub Copilot APIs to generate code suggestions. To learn more about autofix and its data sources, capabilities, and limitations, please see About autofix for CodeQL code scanning.

What's next?

We'll continue to add support for more languages, with C# and Go coming next. We also encourage you to join the autofix feedback and resources discussion to share your experiences and help guide further improvements to the autofix experience. Together, we can help move application security closer to a place where a vulnerability found means a vulnerability fixed.

Resources

To help you learn more, GitHub has published extensive resources and documentation about the system architecture, data flow, and AI policies governing code scanning autofix:

- Changelog: Code scanning now suggests AI-powered autofixes for CodeQL alerts in pull requests (beta)
- Engineering blog: Fixing security vulnerabilities with AI
- Documentation: About autofix for CodeQL code scanning
- Discussion: Autofix feedback and resources

If you want to give code scanning autofix a try, but your organization is new to GitHub or does not yet have GitHub Advanced Security (or its prerequisite, GitHub Enterprise), contact us to request a demo and set up a free trial.
  13. Learn how to automate machine learning training and evaluation using scikit-learn pipelines, GitHub Actions, and CML. View the full article
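The pattern the article describes can be sketched as a workflow that retrains and evaluates the model on every push and publishes the results. The file names (train.py, metrics.txt) and dependency list are assumptions for illustration; CML would additionally post metrics.txt as a pull request comment, while this sketch uses a plain artifact upload:

```yaml
name: Train and evaluate model
on: [push]

jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install scikit-learn pandas
      # train.py is assumed to fit a scikit-learn Pipeline and write
      # evaluation metrics to metrics.txt
      - run: python train.py
      # Publish the metrics so each run's results are reviewable
      - uses: actions/upload-artifact@v4
        with:
          name: metrics
          path: metrics.txt
```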
  14. Millions of secrets and authentication keys were leaked on GitHub in 2023, with most developers failing to revoke them even after being notified of the mishap, new research has claimed. A report from GitGuardian, a project that helps developers secure their software development with automated secrets detection and remediation, claims that in 2023, GitHub users accidentally exposed 12.8 million secrets in more than 3 million public repositories. These secrets include account passwords, API keys, TLS/SSL certificates, encryption keys, cloud service credentials, OAuth tokens, and similar.

Slow response

During the development stage, many IT pros hardcode authentication secrets to make their lives easier. However, they often forget to remove the secrets before publishing the code on GitHub. Thus, should any malicious actors discover these secrets, they would get easy access to private resources and services, which can result in data breaches and similar incidents. India was the country from which most leaks originated, followed by the United States, Brazil, China, France, and Canada. The vast majority of the leaks came from the IT industry (65.9%), followed by education (20.1%). The remaining 14% was split between science, retail, manufacturing, finance, public administration, healthcare, entertainment, and transport. Making a mistake and hardcoding secrets can happen to anyone - but what happens after is perhaps even more worrying. Just 2.6% of the secrets are revoked within the hour - practically everything else (91.6%) remains valid even after five days, when GitGuardian stops tracking their status. To make matters worse, the project sent 1.8 million emails to different developers and companies, warning them of its findings, and just 1.8% responded by removing the secrets from the code. Riot Games, GitHub, OpenAI, and AWS were listed as companies with the best response mechanisms. 
Via BleepingComputer

More from TechRadar Pro

- GitHub's secret scanning feature is now even more powerful, covering AWS, Google, Microsoft, and more
- Here's a list of the best firewalls around today
- These are the best endpoint security tools right now

View the full article
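The remedy the report implies is to keep credentials out of committed source entirely. In a GitHub Actions context, that means storing them as encrypted repository secrets and injecting them at runtime via the secrets context. A minimal sketch (the secret name API_TOKEN and deploy.sh are illustrative assumptions):

```yaml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # The token is read from encrypted repository secrets at runtime,
      # so it never appears in the committed source or the repo history
      - run: ./deploy.sh
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}
```

GitHub also masks values from the secrets context in workflow logs, which limits accidental exposure there as well.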
  15. In February, we experienced two incidents that resulted in degraded performance across GitHub services.

February 26 18:34 UTC (lasting 53 minutes)
February 29 09:32 UTC (lasting 142 minutes)

On February 26 and February 29, we had two incidents related to a background job service that caused processing delays to GitHub services. The incident on February 26 lasted for 53 minutes, while the incident on February 29 lasted for 142 minutes. The incident on February 26 was related to capacity constraints with our job queuing service and a failure of our automated failover system. Users experienced delays in Webhooks, GitHub Actions, and UI updates (for example, a delay in UI updates on pull requests). We mitigated the incident by manually failing over to our secondary cluster. No data was lost in the process. The incident on February 29 also caused processing delays to Webhooks, GitHub Actions, and GitHub Issues services, with 95% of the delays occurring in a 22-minute window between 11:05 and 11:27 UTC. At 09:32 UTC, our automated failover successfully routed traffic, but an improper restoration to the primary at 10:32 UTC caused a significant increase in queued jobs until a correction was made at 11:21 UTC; healthy services then burned down the backlog until full restoration at 11:27 UTC. Based on these two incidents, we have completed three significant short-term improvements to prevent recurrence: better automation, a more reliable fallback process, and expanded capacity for our background job queuing services. For the longer term, we have a more significant effort already in progress to improve the overall scalability and reliability of our job processing platform. Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog. 
The post GitHub Availability Report: February 2024 appeared first on The GitHub Blog. View the full article
  16. At GitHub, we use merge queue to merge hundreds of pull requests every day. Developing this feature and rolling it out internally did not happen overnight, but the journey was worth it—both because of how it has transformed the way we deploy changes to production at scale and because of how it has helped improve the velocity of our customers, too. Let’s take a look at how this feature was developed and how you can use it, too. Merge queue is generally available and is also now available on GitHub Enterprise Server! Find out more.

Why we needed merge queue

In 2020, engineers from across GitHub came together with a goal: improve the process for deploying and merging pull requests across the GitHub service, and specifically within our largest monorepo. This process was becoming overly complex to manage, required special GitHub-only logic in the codebase, and required developers to learn external tools, which meant the engineers developing for GitHub weren’t actually using GitHub in the same way as our customers. To understand how we got to this point in 2020, it’s important to look even further back. By 2016, nearly 1,000 pull requests were merging into our large monorepo every month. GitHub was growing both in the number of services deployed and in the number of changes shipping to those services. And because we deploy changes prior to merging them, we needed a more efficient way to group and deploy multiple pull requests at the same time. Our solution at this time was trains. A train was a special pull request that grouped together multiple pull requests (passengers) that would be tested, deployed, and eventually merged at the same time. A user (called a conductor) was responsible for handling most aspects of the process, such as starting a deployment of the train and handling conflicts that arose. Pipelines were added to help manage the rollout path. 
Both these systems (trains and pipelines) were only used on our largest monorepo and were implemented in our internal deployment system. Trains helped improve velocity at first, but over time started to negatively impact developer satisfaction and increase the time to land a pull request. Our internal Developer Experience (DX) team regularly polls our developers to learn about pain points to help inform where to invest in improvements. These surveys consistently rated deployment as the most painful part of the developer’s daily experience, highlighting the complexity and friction involved with building and shepherding trains in particular. This qualitative data was backed by our quantitative metrics. These showed a steady increase in the time it took from pull request to shipped code. Trains could also grow large, containing the changes of 15 pull requests. Large trains frequently “derailed” due to a deployment issue, conflicts, or the need for an engineer to remove their change. On painful occasions, developers could wait 8+ hours after joining a train for it to ship, only for it to be removed due to a conflict between two pull requests in the train. Trains were also not used on every repository, meaning the developer experience varied significantly between different services. This led to confusion when engineers moved between services or contributed to services they didn’t own, which is fairly frequent due to our inner source model. In short, our process was significantly impacting the productivity of our engineering teams—both in our large monorepo and service repositories.

Building a better solution for us and eventually for customers

By 2020, it was clear that our internal tools and processes for deploying and merging across our repositories were limiting our ability to land pull requests as often as we needed. Beyond just improving velocity, it became clear that our new solution needed to: Improve the developer experience of shipping. 
Engineers wanted to express two simple intents: “I want to ship this change” and “I want to shift to other work;” the system should handle the rest. Avoid having problematic pull requests impact everyone. Those causing conflicts or build failures should not impact all other pull requests waiting to merge. The throughput of the overall system should be favored over fairness to an individual pull request. Be consistent and as automated as possible across our services and repositories. Manual toil by engineers should be removed wherever possible. The merge queue project began as part of an overall effort within GitHub to improve availability and remove friction that was preventing developers from shipping at the frequency and level of quality that was needed. Initially, it was only focused on providing a solution for us, but was built with the expectation that it would eventually be made available to customers. By mid-2021, a few small, internal repositories started testing merge queue, but moving our large monorepo would not happen until the next year for a few reasons. For one, we could not stop deploying for days or weeks in order to swap systems. At every stage of the project we had to have a working system to ship changes. At a maximum, we could block deployments for an hour or so to run a test or transition. GitHub is remote-first and we have engineers throughout the world, so there are quieter times but never a free pass to take the system offline. Changing the way thousands of developers deploy and merge changes also requires lots of communication to ensure teams are able to maintain velocity throughout the transition. Training 1,000 engineers on a new system overnight is difficult, to say the least. 
By rolling out changes to the process in phases (and sometimes testing and rolling back changes early in the morning before most developers started working) we were able to slowly transition our large monorepo and all of our repositories responsible for production services onto merge queue by 2023.

How we use merge queue today

Merge queue has become the single entry point for shipping code changes at GitHub. It was designed and tested at scale, shipping 30,000+ pull requests with their associated 4.5 million CI runs for GitHub.com, before merge queue was made generally available. For GitHub and our “deploy the merge process,” merge queue dynamically forms groups of pull requests that are candidates for deployment, kicks off builds and tests via GitHub Actions, and ensures our main branch is never updated to a failing commit by enforcing branch protection rules. Pull requests in the queue that conflict with one another are automatically detected and removed, with the queue automatically re-forming groups as needed. Because merge queue is integrated into the pull request workflow (and does not require knowledge of special ChatOps commands, or use of labels or special syntax in comments to manage state), our developer experience is also greatly improved. Developers can add their pull request to the queue and, if they spot an issue with their change, leave the queue with a single click. We can now ship larger groups without the pitfalls and frictions of trains. Trains (our old system) previously limited our ability to deploy more than 15 changes at once, but now we can safely deploy 30 or more if needed. Every month, over 500 engineers merge 2,500 pull requests into our large monorepo with merge queue, more than double the volume from a few years ago. The average wait time to ship a change has also been reduced by 33%. And it’s not just numbers that have improved. 
On one of our periodic developer satisfaction surveys, an engineer called merge queue “one of the best quality-of-life improvements to shipping changes that I’ve seen at GitHub!” It’s not a stretch to say that merge queue has transformed the way GitHub deploys changes to production at scale.

How to get started

Merge queue is available to public repositories on GitHub.com owned by organizations and to all repositories on GitHub Enterprise (Cloud or Server). To learn more about merge queue and how it can help velocity and developer satisfaction on your busiest repositories, see our blog post, GitHub merge queue is generally available. Interested in joining GitHub? Check out our open positions or learn more about our platform.
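One practical detail when adopting merge queue: CI workflows that act as required checks must also respond to the merge_group event so they run against the temporary branches the queue creates for each group. A minimal sketch (the job name and test script are illustrative assumptions):

```yaml
name: CI
on:
  pull_request:
  merge_group:   # run required checks on groups formed by the merge queue

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Replace with your real build and test commands
      - run: ./scripts/test.sh
```

Without the merge_group trigger, queued pull requests can stall waiting for required checks that never start.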
  17. GitHub Enterprise Server 3.12 is now generally available. With this version, customers can choose how to best scale their security strategy, gain more control over deployments, and so much more. Highlights of this version include:

- Restrict your deployment rollouts to select tag patterns in GitHub Actions Environments.
- Enforce which GitHub Actions workflows must pass with organization-wide repository rulesets.
- Automate pull request merges using merge queues: validating and merging pull requests into a busy branch, ensuring the branch is never broken, reducing time to merge, and freeing up developers to work on their next tasks.
- Scale your security strategy with Dependabot alert rules. This public beta allows customers to choose how to respond to Dependabot alerts automatically by setting up custom auto-triage rules in their repository or organization.
- Enhance the security of your code with a public beta of secret scanning for non-provider patterns, and an update to code scanning’s default setup to support all CodeQL languages.
- GitHub Project templates are generally available at the organization level, allowing customers to share and learn best practices in how to set up and use projects to plan and track their work.
- Updated global navigation to make using and finding information simpler, as well as improve accessibility and performance.
- Highlight text in markdown files with the alerts markdown extension, which provides five levels to use (note, tip, important, warning, and caution).

Download GitHub Enterprise Server 3.12 now. For help upgrading, use the Upgrade Assistant to find the upgrade path from your current version of GitHub Enterprise Server (GHES) to this new version. 
More GitHub Actions features ensure your code is secure, correct, and compliant before you deploy

Enjoy more control over your deployments by configuring tag patterns

Using environments in GitHub Actions lets you configure your deployment environments with protection rules and secrets in order to better ensure secure deployments. As of today, tag patterns are now generally available. This capability makes it easy to specify selected tags or tag patterns on your protected environments in order to add an additional layer of security and control to your deployments. For example, you can now define that only “Releases/*” tags can be deployed to your production environment. Learn more about securing environments using deployment protection rules.

Required workflows with repository rulesets are now generally available

This feature makes it easy for teams to define and enforce standard CI/CD practices in the form of rulesets across multiple repositories within their organization without needing to configure individual repositories. For anyone using the legacy required workflows feature, your workflows will be automatically migrated to rulesets. With rulesets, it’s easier than ever for organizations to ensure their team’s code is secure, compliant, and correct before being deployed to production. Check out our documentation to learn more about requiring workflows with rulesets.

Bringing automation to merge queue for more efficient collaboration

Automate branch management

Collaborative coding is essential for team productivity, but requires efficient branch management to avoid frustration and maintain velocity. Automated branch management, like merge queue, streamlines this process by ensuring compatibility, alerting developers to any issues, and allowing teams to focus on coding without interruptions. With merge queue available in GHES, enterprises have a central platform for collaboration and the integrated tools for enterprise-level development. 
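The tag-pattern rule itself lives on the environment in repository settings; the deployment workflow then simply targets that environment. A hedged sketch of the workflow side (the workflow name and deploy script are illustrative; the “Releases/*” pattern matches the example above):

```yaml
name: Deploy release
on:
  push:
    tags:
      - 'Releases/*'   # only release tags trigger this workflow

jobs:
  deploy:
    runs-on: ubuntu-latest
    # The production environment's protection rules, including any tag
    # pattern configured in repository settings, gate this job
    environment: production
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh
```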
Simplify your pull request process by using merge queues today.

Using GitHub Advanced Security to scale and enhance your security strategy

Scale your security strategy with Dependabot alert rules

With Dependabot, you can proactively manage security alerts to ensure high-priority items are surfaced. With user-configured alert rules, you can now tailor your security strategy to your specific risk tolerance and contextual needs, streamlining alert triage and remediation processes. GitHub offers suggested rulesets curated for all users, automatically filtering out false positives for public repositories and suggestions for private ones. Dependabot’s rules engine empowers developers to automatically manage alerts, from auto-dismissing to reopening based on customizable criteria. Stay ahead of vulnerabilities with Dependabot, supported by GitHub’s continuously improved vulnerability patterns.

CodeQL supported languages can be set up automatically

With this update, code scanning default setup will change how languages are analyzed in repositories. No longer will repositories need to manually select compiled languages for inclusion in the default setup configuration. Instead, the system will automatically attempt to analyze all CodeQL supported languages. The “edit configuration” page allows users to see which languages are included in each configuration and apply any customization that may be required. This feature will be available at both the repository and organization levels, guaranteeing the best setup for your repository.

Expanded protection beyond patterns

Secret scanning goes beyond provider patterns to detect critical security vulnerabilities like HTTP authentication headers, database connection strings, and private keys. Simply enable the “Scan for non-provider patterns” option in your repository or organization’s security settings to increase your defenses. 
With detected secrets conveniently categorized under a new “Other” tab on the alert list, you can ensure thorough protection for your most sensitive information. Stay ahead of threats and safeguard your data with our comprehensive secret scanning capabilities.

New productivity enhancements to keep teams in the flow

Make what needs to be noticed stand out

Markdown serves as a fundamental tool. It is used for documentation, notes, comments, and decision records. GitHub is now taking it one step further with the addition of a Markdown extension to highlight text, signaling that certain information carries a different meaning than the rest.

Searching is easier and more efficient

We’ve introduced the redesigned global navigation for GitHub.com, featuring a suite of enhancements tailored to elevate user experience and efficiency. Our latest updates to GHES aim to streamline navigation, enhance accessibility, and boost performance. With improved wayfinding through breadcrumbs and easy access to essential repositories and teams from any location, navigating GitHub has never been more seamless.

Create templates to simplify project management

Our latest feature update to GitHub Projects is designed to enhance project management by streamlining project creation and fostering collaboration within teams. With these updates, you can now swiftly create, share, and utilize project templates within your organizations, simplifying the process of starting new projects.

Try it today

To learn more about GitHub Enterprise Server 3.12, read the release notes or download it now. Not using GHES already? Start a free trial to innovate faster with the developer experience platform companies know and love.
  18. Implementing Continuous Integration/Continuous Deployment (CI/CD) for a Python application using Django involves several steps to automate testing and deployment processes. This guide will walk you through setting up a basic CI/CD pipeline using GitHub Actions, a popular CI/CD tool that integrates seamlessly with GitHub repositories.

Step 1: Setting up Your Django Project

Ensure your Django project is in a Git repository hosted on GitHub. This repository will be the basis for setting up your CI/CD pipeline. View the full article
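The excerpt stops at Step 1, but a typical next step in such a pipeline is a workflow that runs the Django test suite on every push and pull request. A hedged sketch (the Python version, requirements.txt, and project layout are assumptions about your project, not part of the article):

```yaml
name: Django CI
on:
  push:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      # Install project dependencies, assumed to be pinned in requirements.txt
      - run: pip install -r requirements.txt
      # Run the Django test suite; manage.py is assumed at the repo root
      - run: python manage.py test
```

A deployment job would normally follow, gated on this one succeeding.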
  19. Companies and their structures are always evolving. Regardless of the reason, with people and information exchanging places, it’s easy for maintainership/ownership information about a repository to become outdated or unclear. Maintainers play a crucial role in guiding and stewarding a project, and knowing who they are is essential for efficient collaboration and decision-making. This information can be stored in the CODEOWNERS file, but how can we ensure that it’s up to date? Let’s delve into why this matters and how the GitHub OSPO’s tool, cleanowners, can help maintainers achieve accurate ownership information for their projects.

The importance of accurate maintainer information

In any software project, having clear ownership guidelines is crucial for effective collaboration. Maintainers are responsible for reviewing contributions, merging changes, and guiding the project’s direction. Without clear ownership information, contributors may be unsure of who to reach out to for guidance or review. Imagine that you’ve discovered a high-risk security vulnerability and nobody is responding to your pull request to fix it, let alone coordinating so that everyone across the company gets the patches needed to fix it. This ambiguity can lead to delays and confusion, unfortunately teaching teams that it’s better to maintain control than to collaborate. These are not the outcomes we are hoping for as developers, so it’s important for us to consider how we can ensure active maintainership, especially of our production components.

CODEOWNERS files

Solving this problem starts with documenting maintainers. A CODEOWNERS file, residing in the root of a repository, allows maintainers to specify individuals or teams who are responsible for reviewing and maintaining specific areas of the codebase. By defining ownership at the file or directory level, CODEOWNERS provides clarity on who is responsible for reviewing changes within each part of the project. 
CODEOWNERS not only streamlines the contribution process but also fosters transparency and accountability within the organization. Contributors know exactly who to contact for feedback, escalation, or approval, while maintainers can effectively distribute responsibilities and ensure that every part of the codebase has proper coverage.

Ensuring clean and accurate CODEOWNERS files with cleanowners

While CODEOWNERS is a powerful tool for managing ownership information, maintaining it manually can be tedious and easily overlooked. To address this challenge, the GitHub OSPO developed cleanowners: a GitHub Action that automates the process of keeping CODEOWNERS files clean and up to date. If it detects that something needs to change, it will open a pull request so this problem gets addressed sooner rather than later. Here’s how cleanowners works:

```yaml
---
name: Weekly codeowners cleanup
on:
  workflow_dispatch:
  schedule:
    - cron: '3 2 1 * *'

permissions:
  issues: write

jobs:
  cleanowners:
    name: cleanowners
    runs-on: ubuntu-latest
    steps:
      - name: Run cleanowners action
        uses: github/cleanowners@v1
        env:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
          ORGANIZATION: <YOUR_ORGANIZATION_GOES_HERE>
```

This workflow, triggered by scheduled runs, ensures that the CODEOWNERS file is cleaned automatically. By leveraging cleanowners, maintainers can rest assured that ownership information is accurate, or it will be brought to the attention of the team via an automatic pull request requesting an update to the file. Here is an example where @zkoppert and @no-longer-in-this-org used to both be maintainers, but @no-longer-in-this-org has left the company and no longer maintains this repository.

Dive in

With tools like cleanowners, CODEOWNERS files become actively managed instead of ignored, allowing maintainers to focus on what matters most: building and nurturing thriving software projects. 
By embracing accurate ownership documentation practices, software projects can continue to flourish, guided by clear ownership and collaboration principles. Check out the repository for more information on how to configure and set up the action.
  20. Research shows that developers complete tasks 55% faster at higher quality when using GitHub Copilot, helping businesses accelerate the pace of software development and deliver more value to their customers. We understand that adopting new technologies in your business involves thorough evaluation and gaining cross-functional alignment. To jump-start your organization’s entry into the AI era, we’ve partnered with engineering leaders at some of the most influential companies in the world to create a new expert-guided GitHub Learning Pathway. This prescriptive content will help organizational leaders understand:

- What can your business achieve using GitHub Copilot?
- How does GitHub Copilot handle data?
- What are the best practices for creating an AI governance policy?
- How can my team successfully roll out GitHub Copilot to our developers?

Along the way, you’ll also get tips and insights from engineering leaders at ASOS, Lyft, Cisco, CARIAD (a Volkswagen Group company), and more who have used GitHub Copilot to increase operational efficiency, deliver innovative products faster, and improve developer happiness! Start your GitHub Copilot Learning Pathway.

Select your GitHub Learning Pathway

NEW! AI-powered development with GitHub Copilot

From measuring the potential impact of GitHub Copilot on your business to understanding the essential elements of a GitHub Copilot rollout, we’ll walk you through everything you need to find success with integrating AI into your business’s software development lifecycle.

CI/CD with GitHub Actions

From building your first CI/CD workflow with GitHub Actions to enterprise-scale automation, you’ll learn how teams at leading organizations unlock productivity, reduce toil, and boost developer happiness.

Application Security with GitHub Advanced Security

Protect your codebase without blocking developer productivity with GitHub Advanced Security. 
You’ll learn how to get started in just a few clicks and move on to customizing GitHub Advanced Security to meet your organization’s unique needs.

Administration and Governance with GitHub Enterprise

Configure GitHub Enterprise Cloud to prevent downstream maintenance burdens while promoting innersource, collaboration, and efficient organizational structures, no matter the size and scale of your organization.

Learning Pathways are organized into three modules:

- Essentials modules introduce key concepts and build a solid foundation of understanding.
- Intermediate modules expand beyond the basics and detail best practices for success.
- Advanced modules offer a starting point for building deep expertise in your use of GitHub.

We are hard at work developing the next GitHub Copilot Learning Pathway module, which will include a deep dive into the nitty-gritty of working alongside your new AI pair programmer. We’ll cover best practices for prompt engineering and using GitHub Copilot to write tests and refactor code, among other topics. Are you ready to take your GitHub skills to the next level? Get started with GitHub Learning Pathways today.
  21. As a product manager working across multiple engineering teams, I spend a lot of time planning out and tracking the work involved for our upcoming releases in GitHub Projects. Each release comes with a set of cross-functional tasks that need to be completed, such as providing public documentation and performing a phased rollout, and there are many teams and stakeholders that need to be kept up to date on progress along the way to ensure it is successful. To best collaborate across diverse teams, we use GitHub Projects to plan, manage, and provide updates for each release right next to our code. Project templates allow us to standardize our process and create a reusable framework to ensure the success of each release, with status updates keeping stakeholders informed of progress along the way.

Using project templates as a reusable framework

Creating and using project templates ensures each upcoming release can hit the ground running and is smooth sailing all the way to general availability. We don’t want to spend time manually setting up a new project or creating tasks for each release (given there are a lot of them!), so we build templates to track repeatable tasks and establish best practices for our teams. There are two kinds of templates we use to get started:

- Built-in templates, provided out of the box, are a good starting point for common projects and use cases. They serve as great examples we can either use directly or build off of and tailor to our needs. The “Team planning” and “Roadmap” templates are just a couple examples that have provided inspiration for how we manage our team backlogs and communicate our quarterly roadmap.
- Organization templates can be created by members in the organization and come with preconfigured views, custom fields, workflows, insights, and draft items. 
These are great for standardizing all of our organization’s project management workflows and can be created from scratch, copied from another project, converted from an existing project, and even recommended within an organization. While we have created templates that our teams use for our own feature release management, let’s dive into how teams across all of GitHub are utilizing project templates!

How GitHub uses project templates

At GitHub, we build project templates to standardize our workflows across the organization and create consistency within and across teams, with a growing collection of over 50 templates that we have built to help us accomplish our repeatable tasks. Some of these templates include:

- Product Feature Release, to help teams kick off a new release and track cross-functional tasks across teams such as engineering, product, design, documentation, and social. This template is specifically helpful for my day-to-day and is used and refined as we learn from each release.
- Program Roadmap, to help teams build and visually communicate their upcoming plans in a single place.
- GitHub Copilot Adoption Blueprint, to help onboard and ensure the success of teams adopting and utilizing GitHub Copilot.
- Engineering Onboarding, to provide a cohesive list of tasks that should be completed in the first weeks and months for a new engineer joining the organization.
- Bug Tracker, to triage new bug reports that come through for first responders, prioritize fixes, and track progress so we can provide ongoing updates.

Our template collection at GitHub continues to grow and expand to more use cases across teams and departments, so it’s one click and you’re off! 
Staying aligned with project status updates

I use our “Product Feature Release” template to plan out and manage each upcoming release, so once we have officially kicked off and are working through our issues and pull requests, I want to make sure that all involved teams and stakeholders are kept in the loop on the progress along the way. To do this, we use project status updates to keep everyone aligned on how the release is progressing, when it is expected, and any risks that we should be aware of, all in a single place in the project. Status updates allow us to provide short and regular summaries on the progress of the release, such as the Status, Start date, and Target date. We tend to provide additional high-level details such as:

- A brief summary of progress over the last week using @ mentions.
- Relevant metrics on early adoption and performance.
- Potential risks or upcoming challenges we should be aware of.
- Dependencies on other teams and workstreams that may impact our target release date.

By providing regular updates, we have a feed of history so our stakeholders can follow along and understand why the target date shifted, why it shifted from On track to At risk, or what the cross-functional dependencies are. I can then see the status of all of my relevant work and projects in a single place, so I can drill in to understand more details.

The bottom line

We are continuing to build our collection of project templates at GitHub to help us standardize our processes across teams, spanning feature releases, employee onboarding, and building and sharing our roadmaps. We encourage teams and organizations to build their library of project templates to help establish and share best practices, and share status updates on their projects to easily communicate progress of those tasks with their teams and stakeholders all in a single place. Harness the power of GitHub Projects. Learn more or get started now. 
The post How we’re using GitHub Projects to standardize our workflows and stay aligned appeared first on The GitHub Blog. View the full article
  22. Since the early days of GitHub Copilot, our customers have asked us for a copilot that is customized to their own organization’s code and processes. Developers spend more time deciphering than shipping when they can’t pinpoint and solve the issues, bugs, or vulnerabilities that are unique to their organization’s codebase. What’s more, developers often write code for only a couple of hours a day and, instead of being creative, are bogged down with mundane tasks throughout their day. The inaccessibility of institutional knowledge acts as a blockade that prevents developers from fully exercising their creativity and building more for you. We’re changing that. Just by integrating generative AI into the editor, GitHub Copilot has quickly defined a new age of software development, resulting in clear gains in developer productivity and happiness. Today, we are bringing the next frontier of developer tools with the general availability of GitHub Copilot Enterprise: a companion that places the institutional knowledge of your organization at your developers’ fingertips. Now, team members can ask questions about public and private code, get up to speed quickly with new codebases, build greater consistency across engineering teams, and ensure that everyone has access to the same standards and work that’s previously been done. Let’s jump in.

A conversational and customized GitHub Copilot experience

Ubiquitous, customized, and tailored to you. Learn more about what’s included with GitHub Copilot Enterprise >

GitHub Copilot Enterprise comes with three core features:

1. Gain a deeper understanding of your organization’s unique codebase. Copilot Enterprise streamlines code navigation and comprehension for developers, enabling faster feature implementation, issue resolution, and code modernization. 
It empowers junior developers to contribute more quickly, assists senior developers in handling live incidents, and aids in modernizing aging codebases by offering clear code summaries, relevant suggestions, and quick answers to queries about code behavior.

https://github.blog/wp-content/uploads/2024/02/BLOG1_issue-investigation_002.mp4

2. Quickly access organizational knowledge and best practices. Copilot Enterprise integrates chat directly into GitHub.com, enabling developers to ask questions about your codebase and receive answers in natural language, and it will guide them to relevant documentation or existing solutions. This facilitates rapid iteration at scale while improving code with personalized assistance and suggestions tailored to an organization’s specific codebase and standards.

https://github.blog/wp-content/uploads/2024/02/BLOG2_chat-knowledge-base_002.mp4

3. Review pull requests faster. With generated pull request summaries, developers can spend less time drafting and more time merging. And with Copilot Enterprise’s ability to analyze pull request diffs, reviewers can quickly get up to speed with proposed changes, spending less time understanding them and more time providing valuable feedback.

https://github.blog/wp-content/uploads/2024/02/BLOG3_pr-summary_001.mp4

As the technology landscape continues to rapidly evolve, we are expanding the capabilities of GitHub Copilot to not only understand your own internal knowledge bases, but also to bring in the latest information from the internet. By integrating Bing search directly into Copilot Chat (available in beta for GitHub Copilot Enterprise) you can find the latest software development information, like updates to CSS or JavaScript frameworks. This means GitHub Copilot can now help your developers explore their curiosity and gain outside knowledge near instantly, at scale. 
https://github.blog/wp-content/uploads/2024/02/BLOG4_bing_001.mp4

We’re already hearing from engineering leaders about the benefits they’re seeing from Copilot Enterprise:

“In a large enterprise like TELUS, the main challenges are breaking silos and sharing collective knowledge. With Copilot Enterprise, our developers and non-developers can more quickly digest a codebase or pull request, no matter the language or framework. This agility allows our teams to go deeper and broader in building products and improving our engineering foundations.” - Wai Ho Choy, Technical Lead // TELUS

“With Copilot Enterprise, Copilot Chat provides personalized recommendations for our developers, making it easier for them to quickly understand context. Already, our developers are accepting 24,000 lines of code every day with Copilot, enabling us to ship innovations to our customers faster.” - Mark Côté, Director of Developer Infrastructure // Shopify

“Personalized, natural language recommendations are now at the fingertips of all our developers at Figma. Copilot Enterprise has improved collaboration across the SDLC by making it easier for our engineers to source and find information via Copilot Chat. We’re also seeing a significant increase in overall developer productivity. Our engineers are coding faster, collaborating more effectively, and building better outcomes.” - Tommy MacWilliam, Engineering Manager for Infrastructure // Figma

GitHub Copilot is becoming an integral part of the developer experience. Its capabilities, such as quickly understanding existing codebases, analyzing code, and accessing knowledge bases, enable developers to concentrate on what truly counts: delivering impactful results. And not only can they be more productive, developers will be happier and more fulfilled, too. Our vision extends to making conversational capabilities ubiquitous by integrating context-driven and customized assistance across the GitHub platform. 
Throughout the development and evolution of GitHub Copilot, we have always placed a priority on security, privacy, compliance, and transparency, and we’ve made that a key focus of GitHub Copilot Enterprise as well. We do not use any of your organization’s private repositories, or your prompts and suggestions, to train the machine learning models that power our products, unless you expressly instruct us to do so, for example with custom models.

Accenture research shows the productivity impact of GitHub Copilot in the enterprise

Building on the proven success of GitHub Copilot

In the last year, we collaborated with Accenture to evaluate the impact of GitHub Copilot on accelerating innovation within a real-world enterprise environment. Here’s what we found:

- GitHub Copilot helped developers at Accenture stay in the flow and minimize interruptions. 94% of developers reported that using GitHub Copilot helped them remain in the flow and spend less effort on repetitive tasks, and 90% of developers spent less time searching for information.
- GitHub Copilot helped developers push better quality code to production. Developers retained 88% of the code suggested by GitHub Copilot in the editor, and around 90% of developers reported that they committed code containing Copilot suggestions.
- GitHub Copilot helped developers write better code and upskill while working. 90% of developers reported writing better code with GitHub Copilot, and roughly 95% of developers learned from Copilot suggestions.

And this was all with GitHub Copilot acting only as an autocomplete function in the editor. With GitHub Copilot Enterprise, we are building on these demonstrated results, multiplying existing GitHub Copilot productivity gains by adding extensive customization so organizations and engineering teams can accomplish more, faster and happier.

Available today

The age of copilots has begun. 
In this new frontier of software development, copilots are ubiquitous, customized, and always at your side. With GitHub Copilot Enterprise, we’re bringing the industry’s premier AI developer tool to every organization for just $39 per user per month. Built with the world’s leading large language model, customized to your organization, and deeply integrated into GitHub’s surfaces, GitHub Copilot Enterprise brings immense value to every organization. Alongside GitHub Enterprise, our end-to-end developer platform, organizations of any size can now start integrating generative AI across the software development lifecycle: from understanding existing code and internal best practices, to fixing bugs and improving functionality, to accelerating code reviews and beyond. Collaboration between humans and intelligent machines will redefine the possibilities of innovation, unlocking novel solutions and accelerating the pace of software development like never before. Ready to harness the power of GitHub Copilot Enterprise today? Learn more or get started now.

How to get GitHub Copilot Enterprise

GitHub Copilot Enterprise comes with the same seat and policy management features as Copilot Business and requires that your organization is already using GitHub Enterprise Cloud. Here’s how to get started:

- If you’re an enterprise administrator: you can manage access for organizations within your enterprise, while organization administrators can handle access for teams and individuals within their organization.
- If you’re a developer: once you’re assigned a GitHub Copilot Enterprise seat, you’ll automatically see Copilot in the GitHub Enterprise and GitHub Mobile interfaces, indicated by chat and smart actions buttons, with no extra steps necessary. For IDE-based functionality, install the GitHub Copilot extension for your IDE. And if you want to use Copilot in the CLI, just use GitHub CLI in your terminal and install the Copilot in the CLI extension.

Learn more >
  23. HashiCorp Nomad supports JWT authentication methods, which allow users to authenticate into Nomad using tokens that can be verified via public keys. JWT auth methods are primarily used for machine-to-machine authentication, while OIDC auth methods are used for human-to-machine authentication. This post explains how JWT authentication works and how to set it up in Nomad using a custom GitHub Action. The GitHub Action will use built-in GitHub identity tokens to obtain a short-lived Nomad token with limited permissions.

How JWT-based authentication works

The first step in JWT-based authentication is the JSON Web Token (JWT) itself. JWTs are encoded pieces of JSON that contain information about the identity of some workload or machine. JWT is a generic format, but for authentication, JWTs will sometimes conform to the more specific OIDC spec and include keys such as “sub”, “iss”, or “aud”. This example JWT decodes to the following JSON:

```json
{
  "jti": "eba60bec-a4e4-4787-9b16-20bed89d7092",
  "sub": "repo:mikenomitch/nomad-gha-jwt-auth:ref:refs/heads/main:repository_owner:mikenomitch:job_workflow_ref:mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main:repository_id:621402301",
  "aud": "https://github.com/mikenomitch",
  "ref": "refs/heads/main",
  "sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "repository": "mikenomitch/nomad-gha-jwt-auth",
  "repository_owner": "mikenomitch",
  "repository_owner_id": "2732204",
  "run_id": "5173139311",
  "run_number": "31",
  "run_attempt": "1",
  "repository_visibility": "public",
  "repository_id": "621402301",
  "actor_id": "2732204",
  "actor": "mikenomitch",
  "workflow": "Nomad GHA Demo",
  "head_ref": "",
  "base_ref": "",
  "event_name": "push",
  "ref_type": "branch",
  "workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "job_workflow_ref": "mikenomitch/nomad-gha-jwt-auth/.github/workflows/github-actions-demo.yml@refs/heads/main",
  "job_workflow_sha": "1b568a7f1149e0699cbb89bd3e3ba040e26e5c0b",
  "runner_environment": "github-hosted",
  "iss": "https://token.actions.githubusercontent.com",
  "nbf": 1685937407,
  "exp": 1685938307,
  "iat": 1685938007
}
```

(Note: if you ever want to decode or encode a JWT, jwt.io is a good tool.)

This specific JWT contains information about a GitHub workflow, including an owner, a GitHub Action name, a repository, and a branch. That is because it was issued by GitHub and is an identity token, meaning it is meant to be used to verify the identity of this workload. Each run in a GitHub Action can be provisioned with one of these JWTs. (More on how they can be used later in this blog post.)

Importantly, aside from the information in the JSON, JWTs can be signed with a private key and verified with a public key. Note that while they are signed, their contents can still be decoded by anybody; the signature proves authenticity, not secrecy. The public keys for a JWT issuer can often be found at a well-known URL called a JSON Web Key Set (JWKS) endpoint. For example, these GitHub public keys can be used to verify GitHub’s identity tokens.

JWT authentication in Nomad

Nomad can use external JWT identity tokens to issue its own Nomad ACL tokens with the JWT auth method. In order to set this up, Nomad needs:

- Roles and/or policies that define access based on identity
- An auth method that tells Nomad to trust JWTs from a specific source
- A binding rule that tells Nomad how to map information from that source onto Nomad concepts, like roles and policies

Here’s how to set up authentication in Nomad to achieve the following rule: I want any repo using an action called “Nomad JWT Auth” to get a Nomad ACL token that grants the action permissions for all the Nomad policies assigned to a specific role for its GitHub organization. 
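To make the "decodable by anybody" point concrete, here is a minimal Python sketch (not from the original post) that extracts a JWT's claims without verifying its signature; the token contents in the usage are hypothetical:

```python
import base64
import json

def decode_jwt_claims(jwt: str) -> dict:
    """Decode the claims (payload) segment of a JWT without verifying it."""
    claims_segment = jwt.split(".")[1]
    # JWT segments use unpadded base64url; restore padding before decoding
    claims_segment += "=" * (-len(claims_segment) % 4)
    return json.loads(base64.urlsafe_b64decode(claims_segment))
```

Keep in mind this only reads the claims; checking the signature against the issuer's JWKS keys is what actually establishes trust, and that is the part Nomad performs for you.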
Tokens should be valid for only one hour, and the action should be valid only for the main branch. That may seem like a lot, but with Nomad JWT authentication it’s actually fairly simple. In older versions of Nomad, complex authentication like this was impossible, which forced administrators to use long-lived tokens with very high levels of permissions. If a token was leaked, admins had to manually rotate all of the tokens stored in external stores. This made Nomad less safe and harder to manage. Now tokens can be short-lived, and after a one-time setup with identity-based rules, users don’t have to worry about managing Nomad tokens for external applications.

Setting up JWT authentication

To set up the authentication, start by creating a simple policy that has write access to the namespace “app-dev” and another policy that has read access to the default namespace.

Create a namespace called app-dev:

```shell
nomad namespace apply "app-dev"
```

Write a policy file called app-developer.policy.hcl:

```hcl
namespace "app-dev" {
  policy = "write"
}
```

Then create it with this CLI command:

```shell
nomad acl policy apply -description "Access to app-dev namespace" app-developer app-developer.policy.hcl
```

Write a policy file called default-read.policy.hcl:

```hcl
namespace "default" {
  policy = "read"
}
```

Then create it in the CLI:

```shell
nomad acl policy apply -description "Read access to default namespace" default-read default-read.policy.hcl
```

Next, create roles that have access to these policies. Often these roles are team-based, such as “engineering” or “ops”, but in this case, create a role whose name is “org-” followed by our GitHub organization’s name: mikenomitch. Repositories in this organization should be able to deploy to the “app-dev” namespace, and we should be able to set up a GitHub Action to deploy them on merge. 
Give this role access to the two new policies:

```shell
nomad acl role create -name="org-mikenomitch" -policy=app-developer -policy=default-read
```

Now, create a file defining an auth method for GitHub in auth-method.json:

```json
{
  "JWKSURL": "https://token.actions.githubusercontent.com/.well-known/jwks",
  "ExpirationLeeway": "1h",
  "ClockSkewLeeway": "1h",
  "ClaimMappings": {
    "repository_owner": "repo_owner",
    "repository_id": "repo_id",
    "workflow": "workflow",
    "ref": "ref"
  }
}
```

Then create it with the CLI:

```shell
nomad acl auth-method create -name="github" -type="JWT" -max-token-ttl="1h" -token-locality=global -config "@auth-method.json"
```

This tells Nomad to expect JWTs from GitHub, to verify them using the public keys at JWKSURL, and to map key-value pairs found in the JWT to new names. This allows binding rules to be created using these values. A binding rule sets up the complex auth logic requirements stated earlier in this post:

```shell
nomad acl binding-rule create \
  -description 'repo name mapped to role name, on main branch, for "Nomad JWT Auth" workflow' \
  -auth-method 'github' \
  -bind-type 'role' \
  -bind-name 'org-${value.repo_owner}' \
  -selector 'value.workflow == "Nomad JWT Auth" and value.ref == "refs/heads/main"'
```

The selector field tells Nomad to match only JWTs with certain values in the ref and workflow fields. The bind-type and bind-name fields tell Nomad to map JWTs that match this selector to specific roles; in this case, roles whose name matches the GitHub organization name. If you wanted more granular permissions, you could match role names to repository IDs using the repo_id field. So, JWTs for repositories in the mikenomitch organization are granted an ACL token with the role org-mikenomitch, which in turn grants access to the app-developer and default-read policies.

Nomad auth with a custom GitHub Action

Now you’re ready to use a custom GitHub Action to authenticate into Nomad. 
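The effect of the binding rule can be sketched in a few lines of Python. This is only an illustration of the mapping logic (not Nomad's implementation): the selector filters the mapped claims, and the bind-name template produces the role name:

```python
from typing import Optional

def role_for_claims(claims: dict) -> Optional[str]:
    """Illustrate the binding rule: selector check, then bind-name templating."""
    # selector: value.workflow == "Nomad JWT Auth" and value.ref == "refs/heads/main"
    selector_ok = (
        claims.get("workflow") == "Nomad JWT Auth"
        and claims.get("ref") == "refs/heads/main"
    )
    if not selector_ok:
        return None  # JWT does not match the binding rule; no role is granted
    # bind-name: 'org-${value.repo_owner}'
    return f"org-{claims['repo_owner']}"
```

Note that the claims here use the mapped names from ClaimMappings (repo_owner, not repository_owner), since binding rules operate on the mapped values.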
This will expose a short-lived Nomad token as an output, which can be used by another action that uses simple bash to deploy any files in the ./nomad-jobs directory to Nomad. The code for this action is very simple: it just calls Nomad’s /v1/acl/login endpoint, specifying the GitHub auth method and passing in the GitHub Action’s JWT as the login token. (See the code.) To use this action, just push to GitHub with the following file at .github/workflows/github-actions-demo.yml:

```yaml
name: Nomad JWT Auth
on:
  push:
    branches:
      - main
      - master
env:
  PRODUCT_VERSION: "1.7.2"
  NOMAD_ADDR: "https://my-nomad-addr:4646"
jobs:
  Nomad-JWT-Auth:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup `nomad`
        uses: lucasmelin/setup-nomad@v1
        id: setup
        with:
          version: ${{ env.PRODUCT_VERSION }}
      - name: Auth Into Nomad
        id: nomad-jwt-auth
        uses: mikenomitch/nomad-jwt-auth@v0.1.0
        with:
          url: ${{ env.NOMAD_ADDR }}
          caCertificate: ${{ secrets.NOMAD_CA_CERT }}
        continue-on-error: true
      - name: Deploy Jobs
        run: for file in ./nomad-jobs/*; do NOMAD_ADDR="${{ env.NOMAD_ADDR }}" NOMAD_TOKEN="${{ steps.nomad-jwt-auth.outputs.nomadToken }}" nomad run -detach "$file"; done
```

Now you have a simple CI/CD flow set up on GitHub Actions. It does not require manually managing tokens and is secured via identity-based rules and auto-expiring tokens.

Possibilities for JWT authentication in Nomad

With the JWT auth method, you can enable efficient workflows for tools like GitHub Actions, simplifying management of Nomad tokens for external applications. Machine-to-machine authentication is an important function in cloud infrastructure, yet implementing it correctly requires understanding several standards and protocols. Nomad’s introduction of JWT authentication methods provides the necessary building blocks to make setting up machine-to-machine auth simple. 
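The login call the action performs can be approximated with a short Python sketch. The endpoint and request body follow Nomad's ACL login API; the address is a placeholder, so treat this as an illustration rather than the action's actual implementation:

```python
import json
import urllib.request

def build_login_payload(auth_method: str, jwt: str) -> dict:
    """Request body for Nomad's POST /v1/acl/login endpoint."""
    return {"AuthMethodName": auth_method, "LoginToken": jwt}

def nomad_jwt_login(addr: str, jwt: str, auth_method: str = "github") -> dict:
    """Exchange an identity JWT for a short-lived Nomad ACL token."""
    req = urllib.request.Request(
        f"{addr}/v1/acl/login",
        data=json.dumps(build_login_payload(auth_method, jwt)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes SecretID, the Nomad token the workflow exports
        return json.load(resp)
```

In the workflow above, the returned token is what surfaces as the `nomadToken` output and is passed to `nomad run` via NOMAD_TOKEN.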
This auth method extends the authentication methods made available in Nomad 1.5, which introduced SSO and OIDC support. As organizations move towards zero trust security, Nomad users now have more choices when implementing access to their critical infrastructure. To learn more about how HashiCorp provides a solid foundation for companies to safely migrate and secure their infrastructure, applications, and data as they move to a multi-cloud world, visit our zero trust security page. To try the feature described in this post, download the latest version of HashiCorp Nomad. View the full article
  24. This week on KDnuggets: Discover GitHub repositories from machine learning courses, bootcamps, books, tools, interview questions, cheat sheets, MLOps platforms, and more to master ML and secure your dream job • Data engineers must prepare and manage the infrastructure and tools necessary for the whole data workflow in a data-driven company • And much, much more! View the full article
  25. The blog covers machine learning courses, bootcamps, books, tools, interview questions, cheat sheets, MLOps platforms, and more to master ML and secure your dream job. View the full article