Showing results for tags 'github'.

  1. GitHub Actions enables you to automate workflows directly from within a GitHub repository. Workflows are stored as YAML definition files in the repository's .github/workflows directory, and can be configured to perform a variety of build and release steps. On […] The article GitHub Actions: Commit and Push Changes Back to Repository appeared first on Build5Nines. View the full article
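The pattern the article title describes, a workflow that commits changes made during the run back to the repository, can be sketched in a workflow file like this (the trigger, job name, and file paths are illustrative, not taken from the article):

```yaml
# Illustrative sketch: a job step that commits files changed during the
# workflow run back to the repository using the built-in GITHUB_TOKEN.
name: update-docs
on:
  workflow_dispatch:
permissions:
  contents: write   # allow the workflow's GITHUB_TOKEN to push
jobs:
  commit-changes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Regenerate a file (placeholder step)
        run: date > docs/last-updated.txt
      - name: Commit and push changes back
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git diff --cached --quiet || git commit -m "chore: update generated files"
          git push
```

The `git diff --cached --quiet ||` guard keeps the job from failing when the run produced no changes to commit.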
  2. Battle of the Gits? Well, not so much a battle: Git, GitHub, and GitLab act as complementary tools in the software development lifecycle. Git forms the foundational backbone of version control, while GitHub and GitLab build upon its capabilities, offering a comprehensive ecosystem for developers. Though distinct, these tools work in harmony to streamline the development process. In this article, we’ll explore the differences and similarities between each and guide you on when to leverage them based on your project’s needs.

Key Takeaways

  • Git, GitHub, and GitLab work together to enhance the software development process, each offering unique features.
  • GitHub is popular for open-source projects and community-driven development.
  • GitLab stands out as an integrated DevOps platform with comprehensive CI/CD pipelines and security features.

What is Git?

Git is a free and open-source distributed version control system designed to handle projects of any size with speed and efficiency. Unlike centralized systems, Git allows developers to work independently with a full copy of the codebase on their local machines.
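The distributed, branch-based workflow described above can be sketched in a short shell session (the /tmp/git-demo path, identities, and file names are illustrative; this assumes git is installed):

```shell
# A minimal sketch of Git's branch, staging-area, and merge workflow.
set -e
rm -rf /tmp/git-demo
mkdir -p /tmp/git-demo
cd /tmp/git-demo
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "hello" > app.txt
git add app.txt                       # staging area: pick exactly what gets committed
git commit -qm "initial commit"

git checkout -q -b feature/greeting   # branch off for the new feature
echo "feature work" >> app.txt
git add app.txt
git commit -qm "add greeting"

git checkout -q -                     # return to the branch we started on
git merge -q feature/greeting         # merge the feature back
git log --oneline                     # both commits now on the main line
```

Because every step above runs against the local repository, no network or hosting service is involved; that is precisely the gap GitHub and GitLab fill.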
Git’s Key Features

Let’s explore the standout features that make Git an indispensable tool for modern software teams:

  • Branching and merging: Git’s powerful branching model enables developers to create separate branches for features, bug fixes, or experiments, seamlessly merging them back into the main codebase after review.
  • Distributed nature: Every developer has a complete local repository, eliminating single points of failure and enabling offline work.
  • Staging area: The staging area provides granular control over which changes are committed, enhancing code organization and ensuring only intended modifications are recorded.
  • Lightweight and fast: Git’s efficient design allows for lightning-fast performance, even with large codebases and complex projects.

While Git excels at managing source code, let’s look at how platforms like GitHub and GitLab build upon its capabilities. To learn more about Git, check out the blog How Git Works.

What is GitHub?

GitHub is a cloud-based hosting service that provides a user-friendly web interface for managing Git repositories. It allows developers to store, share, and collaborate on their codebase with teams or the open-source community. In 2018, GitHub was acquired by Microsoft, further solidifying its position as a leading platform for software development.
GitHub’s Key Features

Here are the standout features that make GitHub a powerful addition to the Git ecosystem:

  • Remote repository hosting: GitHub’s core functionality is hosting Git repositories remotely, providing a centralized location for developers to push and pull code changes.
  • Collaboration and social coding: GitHub fosters collaboration by allowing developers to follow projects, contribute code, and interact through discussions, issues, and pull requests.
  • Issue tracking: GitHub’s issue tracking system enables teams to report bugs, propose new features, and manage project tasks effectively.
  • Pull requests and code review: GitHub’s pull request mechanism streamlines the code review process, allowing developers to propose changes, receive feedback, and merge code into the main codebase.
  • Project management tools: GitHub offers integrated project management tools, such as boards, wikis, and project tracking, to help teams organize and manage their development workflows.

Check out our blog post to learn How GitHub Works.

What is GitLab?

GitLab is a web-based platform that streamlines development workflows. It does this by merging Git repository management with continuous integration (CI), deployment, and collaboration tools. GitLab facilitates code versioning and team cooperation and automates the pipeline from development to deployment, simplifying the entire software lifecycle within its unified platform.
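GitLab pipelines are declared in a .gitlab-ci.yml file at the repository root; a minimal sketch might look like the following (stage names, jobs, image, and commands are all illustrative, not from the article):

```yaml
# Illustrative .gitlab-ci.yml: three stages run in order on each push.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  image: golang:1.22
  script:
    - go build ./...

test-job:
  stage: test
  image: golang:1.22
  script:
    - go test ./...

deploy-job:
  stage: deploy
  script:
    - echo "Deploying..."   # placeholder for a real deployment command
  environment: production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # only deploy from the main branch
```

Committing this single file is enough to get a working pipeline, which is the "unified platform" point the article makes.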
GitLab’s Key Features

Let’s explore the standout features that make GitLab a powerful DevOps platform:

  • Repository hosting (similar to GitHub): Like GitHub, GitLab provides a central location for hosting Git repositories, enabling teams to collaborate on code and manage version control.
  • Continuous Integration/Continuous Deployment (CI/CD): One of GitLab’s standout features is its built-in CI/CD pipelines, allowing teams to automate the entire software delivery process, from code commit to production deployment.
  • Issue tracking and project management: GitLab offers robust issue tracking and project management tools, helping teams organize and prioritize tasks, bugs, and feature requests.
  • Code review and collaboration: Similar to GitHub’s pull requests, GitLab’s merge requests facilitate code review and collaboration among team members, ensuring code quality and consistency.
  • Integrated DevOps tools: GitLab provides a comprehensive DevOps toolchain, including features for container management, monitoring, and security scanning. This streamlines the entire development lifecycle within a single platform.

With a strong focus on DevOps practices and an integrated toolset, GitLab caters to organizations seeking a more seamless and automated software delivery process.

Git vs. GitHub vs. GitLab

As we’ve explored the individual capabilities of the three platforms, it’s essential to understand their distinctions and commonalities.
The following comparison covers the main features and aspects of Git, GitHub, and GitLab:

  • Type — Git: version control system. GitHub: Git repository hosting service. GitLab: integrated DevOps platform.
  • Primary use — Git: local version control. GitHub: remote repository hosting, collaboration, and code sharing. GitLab: comprehensive software development, CI/CD, and collaboration.
  • Hosting — Git: local and self-hosted. GitHub: primarily cloud-hosted (GitHub servers), with some self-hosting options. GitLab: cloud-hosted (GitLab.com) and self-hosted options; supports hybrid models.
  • CI/CD integration — Git: not built-in; requires third-party tools. GitHub: GitHub Actions (robust CI/CD tool). GitLab: comprehensive CI/CD pipelines and automation.
  • Access control — Git: basic, through Git hooks and server configuration. GitHub: detailed access control with teams, role-based permissions, and collaboration features. GitLab: detailed access control, including group and subgroup management, fine-grained permissions, and protected branches.
  • License — Git: open source (GPLv2). GitHub: proprietary, with some open-source projects. GitLab: open-source (Core) and proprietary (Premium) editions.
  • Community features — Git: none. GitHub: issue tracking, discussions, wikis, and collaboration features (forks, pull requests). GitLab: similar to GitHub, with additional DevOps project management tools (boards, milestones).
  • Integration — Git: requires external tools for additional functionality. GitHub: wide range of integrations through the GitHub Marketplace. GitLab: comprehensive integrations within its DevOps ecosystem, including third-party tools and services.
  • Pricing/cost model — Git: free. GitHub: free for public repositories; paid plans for private repositories and additional features. GitLab: free (Core); paid plans for Premium features; self-hosted pricing available.

As evident from the comparison above, Git, GitHub, and GitLab share a common foundation: Git.
However, the key differences emerge in the following areas:

  • Purpose and focus: While Git is solely dedicated to version control, GitHub caters to social coding and open-source communities, and GitLab sets itself apart as an integrated DevOps platform, offering comprehensive CI/CD pipelines and a seamless toolchain for the entire software development lifecycle.
  • DevOps and CI/CD integration: GitLab stands out with its extensive built-in CI/CD capabilities and automation, allowing teams to streamline their software delivery processes. GitHub offers robust CI/CD features through Actions.
  • Collaboration and project management: Both GitHub and GitLab provide robust collaboration tools, including issue tracking, code reviews, and project management features. However, GitLab offers additional DevOps-specific project management tools, such as boards and milestones.

When to Use Git, GitHub, or GitLab

This section will guide you through the common situations where one tool might be preferred over the others.

Git: essential for any project involving version control. Git is the foundational version control system that underpins the development process for countless projects worldwide. Regardless of the project’s size or complexity, and whether you’re a solo developer or part of a large team, Git is a must-have tool in your development arsenal.

GitHub: a popular choice for open-source projects and public repositories. GitHub’s vibrant community, social coding features, and seamless collaboration capabilities make it an attractive choice for developers and teams looking to contribute to or leverage open-source software. If your project involves open-source development, GitHub can streamline your workflows and foster effective collaboration.

GitLab: ideal for secure and comprehensive DevOps. GitLab shines as the preferred choice for organizations seeking a comprehensive, integrated DevOps toolchain.
If your organization requires advanced DevOps capabilities, GitLab’s toolset can help you achieve a seamless development lifecycle. Additionally, its support for self-hosting and hybrid deployment models ensures that you can meet your organization’s specific security needs.

Conclusion

In software development, the choice of tools is paramount. By leveraging the complementary nature of Git, GitHub, and GitLab, you can create a development ecosystem that seamlessly integrates version control, collaboration, and DevOps practices, enabling your team to focus on delivering high-quality software that drives innovation and success. Enroll in our Git for Beginners course to learn and practice more Git concepts. View the full article
  3. Automotive software development moves to the cloud

We are at an inflection point for automotive embedded development to move to the cloud. In an era where software has not just eaten the world but is continuously redefining it through AI, the cloud emerges not just as a platform but as the foundational fabric for software engineering. With AI’s increasing demand for computational power driving unprecedented changes in silicon, both at the edge and in the cloud, the need for agile, scalable, and continuously optimized development environments has never been more critical. As the home of the world’s developers, GitHub is the platform to build the next generation of automotive and embedded development environments in the cloud.

Traditional embedded development challenges

Improving the developer experience is at the heart of what GitHub does. We’re dedicated to making coding as smooth as possible by reducing unnecessary complexity. The traditional process for developers working with embedded systems has plenty of friction to remove. Historically, software development has been very hardware-dependent, with developers maintaining some combination of test hardware connected to their development machines or an in-house testing farm. There weren’t many alternatives because so much was proprietary. In recent years, a series of technical advancements have significantly influenced the foundational architectures within the field. Despite these changes, many traditional methods and operational processes remain in use. Key developments include the adoption of more powerful multipurpose processors, the establishment of open standards for the lower-level software stack, such as SOAFEE.io for cloud-native architecture at the edge, and the increased reliance on open-source resources, facilitating reuse across different domains.
These innovations have provided developers with the opportunity to fundamentally rethink their approaches to development, enabling more efficient and flexible strategies. As the pace of these technical trends and foundational changes increases, teams are finding it increasingly difficult to deliver application commitments without the significant cost of maintaining these in-house development and test environments. See how Scalable Open Architecture For Embedded Edge (SOAFEE), an industry-led collaboration between companies across the automotive and technology sectors, is working to radically simplify vehicle software solutions.

Virtualization for embedded and automotive development

While virtualization has become a cornerstone of enterprise development, its integration into embedded systems has proceeded at a more cautious pace. The complexities inherent in embedded systems—spanning a vast array of processors, operating systems, and specialized software—pose unique challenges not encountered in the more homogeneous environments of data centers and IT networks. Embedded systems require a nuanced approach to virtualization that goes beyond simply accommodating mainstream operating systems like Windows and Linux on standard Intel architectures. In a significant development that reflects the evolving landscape of embedded systems, in March 2024, Arm unveiled its new Automotive Enhanced (AE) processors. These cutting-edge processors are designed to boost AI capabilities within the automotive sector while ensuring ISA (Instruction Set Architecture) compatibility. This advancement is poised to revolutionize the way applications are developed and deployed, enabling developers to create software in the cloud and seamlessly transition it to the edge, such as in vehicles, without the need for extensive reconfiguration or modification.
This leap forward promises to accelerate the time-to-market for new applications, bridging the gap between cloud development environments and the nuanced world of embedded systems. This transition exemplifies how advancements in processor technology and virtualization are converging to address the unique challenges of embedded development, paving the way for more integrated and efficient systems across industries. Developers will be able to write, build, and test code in the cloud and then run their applications in virtualized environments with digital twins that mirror their processor targets, even if those targets haven’t yet been delivered in silicon.

Cloud-based continuous integration platform

Continuous integration (CI), a cornerstone of agile methodologies for over two decades, automates the build, test, and deployment processes. This automation accelerates feedback loops, enabling timely verification that the software meets the intended requirements. It also minimizes integration risks and enhances the early detection of defects and security vulnerabilities. While surveys indicate that many embedded development teams have adopted CI as a practice, managing development environments across multiple hardware configurations and deployment targets is costly and complex. Implementing CI/CD in a cloud environment leverages the well-established advantages of cloud computing for embedded engineering teams, significantly enhancing their ability to deliver high-quality products within tight market timelines.

  • Enhanced scalability. Cloud-based CI allows teams to dynamically allocate resources and optimize compute spend. Teams can execute workloads in parallel in order to support multiple hardware and software configurations simultaneously. Developers can also participate across geographic regions or even across organizational boundaries within the supply chain.
  • Reduced complexity. Standardizing on cloud-based CI reduces environment setup and teardown times and promotes consistency. Workflows can easily be shared across teams.
  • Improved quality. When compute resources are too constrained or managing the CI environment is brittle, teams may optimize locally on too narrow a slice of the development process. Reducing this friction, and thereby tightening end-to-end feedback loops, can improve quality.

To deliver cloud-based embedded developer environments for design and build time that feed into the runtime virtualized and simulated targets, GitHub needed to update our infrastructure. In October 2023, GitHub announced native Arm64 support for our hosted CI/CD workflow engine, GitHub Actions. Supporting this platform is important because Arm’s family of processor designs is central to many uses in the embedded and automotive world. This promises to free embedded developers from being tied to the desktop. By moving jobs to the cloud, development teams will be able to focus more on coding and less on infrastructure management. We also recently announced the public beta of GPU hosted runners, which will enable teams building machine learning models to do complete application testing, including the ML components, within GitHub Actions.

Conclusion

The convergence of cloud technologies, advanced virtualization, and cutting-edge processor innovations represents a transformative shift in automotive software development. To further advance and support these transformations across the industry, GitHub has recently joined SOAFEE.io, in addition to maintaining our membership in the Connected Vehicle Systems Alliance (COVESA) and supporting Microsoft’s commitment to the Eclipse Software Defined Vehicle project. GitHub Enterprise Cloud, along with Arm’s latest AE processors, heralds a new era where development and testing transcend traditional boundaries, leveraging the cloud’s vast resources for more efficient, scalable, and flexible software creation.
This paradigm shift towards cloud-based development and virtualized testing environments not only addresses the complexities and limitations of embedded system design but also dramatically reduces the overhead associated with physical hardware dependencies. By enabling developers to seamlessly transition applications from the cloud to the edge without extensive rework, the automotive industry stands on the brink of a significant acceleration in innovation and time-to-market for new technologies. GitHub’s introduction of native Arm64 support and the public beta of GPU hosted runners on its CI/CD platform, GitHub Actions, further underscores this transition. These advancements ensure that the embedded and automotive development communities can fully harness the cloud’s potential, facilitating a shift from local, hardware-constrained development processes to a more agile, cloud-centric approach. As a result, developers can focus more on innovation and less on the intricacies of hardware management, propelling the automotive sector into a future where software development is more integrated, dynamic, and responsive to the rapidly evolving demands of technology and consumers. This transition not only signifies a leap forward in how automotive software is developed but also reflects a broader trend towards the cloud as the backbone of modern software engineering across industries. Learn more about GitHub-hosted runners and look for the public beta for Arm-hosted runners coming later this year.
  4. AWS CodeBuild now supports managed GitHub Action self-hosted runners. Customers can configure their CodeBuild projects to receive GitHub Actions workflow job events and run them on CodeBuild ephemeral hosts. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. View the full article
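With a CodeBuild project configured as a runner provider, a workflow targets it through a special runs-on label. A hedged sketch (the project name my-runner-project is a placeholder; the label format follows AWS's documented convention for these runners):

```yaml
# Illustrative workflow job that runs on a CodeBuild-managed ephemeral host
# instead of a GitHub-hosted runner.
name: ci
on: [push]
jobs:
  build:
    runs-on: codebuild-my-runner-project-${{ github.run_id }}-${{ github.run_attempt }}
    steps:
      - uses: actions/checkout@v4
      - run: make test   # placeholder build command
```

Embedding the run ID and attempt in the label lets CodeBuild map each workflow job event to a fresh ephemeral host.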
  5. This is abridged content from November 2023’s Insider newsletter. Like what you see? Sign up for the newsletter to receive complete, unabridged content in your inbox twice a month. Sign up now >

Did you know that just about every page on GitHub has a keyboard shortcut? In this blog post, we’ll uncover the world of GitHub keyboard shortcuts and how they can help you navigate and perform actions swiftly. After reading this post, you’ll be able to:

  • Master the shortcuts. You might be asking, how can I access said shortcuts? Simply by typing “?” on any GitHub page! These shortcuts will empower you to perform various actions across the site without relying on your mouse.
  • Customize your experience. You can tailor your shortcut experience by enabling or disabling character key shortcuts according to your preferences, all within your accessibility settings. For more information, see “Managing accessibility settings.”
  • Make magic. With the GitHub Command Palette, you can effortlessly navigate, search, and execute commands on GitHub—all without the need to memorize multiple keyboard combinations. To open the command palette, use “Ctrl+K” or “Ctrl+Alt+K” on Windows and Linux, or “Command+K” or “Command+Option+K” on Mac.

Please note: not all shortcuts are available on every page. When you open the shortcut window (?), it will show you the keyboard shortcuts available there.

Ready to give your mouse a break? Let’s dive into some top keyboard shortcuts to get you started.

Navigation. Tap these keys to navigate your way around the platform with ease:

  • T: Quick access to “File Finder.”
  • W: Close the currently open tab or pull request.
  • S: Focus on the site search bar.
  • G, P: Jump to your profile.

Repository navigation. These shortcuts will guide you through your repositories:

  • G, I: Jump to your issues.
  • G, P: Navigate to your pull requests.
  • G, B: Head to your repository.
  • G, C: Visit your repository’s code.

Issues and pull requests. Spin up issues and pull requests with a single keystroke:

  • C: Create a new issue.
  • Y: Close an issue or pull request.
  • R: Reopen a closed issue or pull request.
  • K: Move up the discussion timeline.
  • J: Move down the discussion timeline.

Search. Quickly spin up a search bar to find what you need right when you need it:

  • /: Start a quick search.
  • S: Focus on the site search bar.
  • F: Search within the code in a repository.
  • T: File finder for code search.
  • W: View code in a workspace.

Notifications. Stay on top of your projects with a hop over to your notifications:

  • G, N: Go to your notifications.

Create and submit. Spin up a new repository or view your issues in a flash:

  • N: Create a new repository.
  • I: Go to your issues.
  • P: Navigate to your pull requests.
  • B: Visit your repository.

Security. Keep abreast of your security posture by navigating to your settings with ease:

  • G, S: Navigate to your security settings.

With these keyboard shortcuts under your belt, you’ll become a GitHub power user in no time. And remember, you don’t have to commit all of these to memory—the GitHub Command Palette has that covered for you. Want to know what other GitHub users’ favorite keyboard shortcuts are? Take a look through the comments on this video. And to further boost your productivity on GitHub, you can explore GitHub Actions—an integrated automation and CI/CD service within your repositories. GitHub Actions streamlines code-related tasks and development by defining event-triggered workflows; check it out today! Get started with GitHub Actions. Want to receive content like this twice a month, right in your inbox? Sign up for the newsletter now >
  6. Imagine arriving at a conference and immediately feeling inspired: your agenda is packed with must-see GitHub Copilot sessions, booths are filled with experts from top tech companies, and you’re surrounded by thousands of fellow developers and leaders who are eager to connect. That is the experience we’re curating for the 10th anniversary of our global developer event. This year, we’re going bigger and better with a stunning new venue as the foundation. We hope you’ll join us at the Fort Mason Center for Arts & Culture on the San Francisco Bay, from October 29-30, or virtually from anywhere in the world. As the world’s fair of software, GitHub Universe 2024 will be an unparalleled gathering of the brightest minds, companies, and innovators in the industry. With sessions diving into AI, the developer experience (DevEx), and security, attendees will have an opportunity to explore the latest products, best practices, and insights shaping the future of software development. Ready to be a part of this milestone event with us? In-person tickets are currently 35% off with our Super Early Bird discount, only from now until July 8. Get tickets

Universe 2024: Where innovation meets fun, food, and connection

We take your experience as a Universe attendee very seriously. From the moment you step through the colorful gates right down to the beverages we serve, our 10th anniversary event will blow your expectations out of the water. Spread across a sprawling 13-acre waterfront compound, Universe will unfold across seven buildings and various outdoor areas. With five stages hosting more than 100 sessions and 150 speakers, alongside a record-breaking 3,500 attendees (that’s over 50% more in-person attendees than last year!), this will be our biggest Universe yet. During breakfast and lunch, you’ll indulge in food trucks, snacks, and beverages—all included in the price of your in-person ticket.
And don’t forget to explore the GitHub Shop for the latest Universe swag and join us for lively happy hours sponsored by our partners.

Everything you’ll learn at our global developer event

Attending Universe is an investment in your business and your career. It’s easier than ever to be in charge of your growth with our beginner, intermediate, and advanced session topics curated to what developers and enterprises care about most. As an in-person attendee, you’ll also be able to take advantage of two ticket add-ons: GitHub Certification testing and workshops, available onsite! Take what you learn during your sessions and practice it IRL alongside your industry peers. You can secure your spot for workshops and certifications when you purchase your in-person ticket. Don’t miss out—these opportunities will go fast! If you’re interested in attending Universe as a speaker instead, now is your chance! The call for sessions (CFS) is now open. Learn about the super cool perks Universe speakers get and submit a session proposal by May 10 to be considered. (And yes, you’ll get a speaker honorarium to cover travel costs if selected!) Here’s a sneak peek of the themes we have in store.

AI content track

This track will delve into:

  • The impact of AI on software development life cycles.
  • Practical uses like automating pull requests and using AI code generation tools like GitHub Copilot for onboarding and productivity gains.
  • Optimizing AI outputs, crafting AI policies, and fostering responsible AI deployment while evolving skill sets for success in the AI era.

DevEx content track

Learn about the following within this track:

  • How the GitHub platform enhances platform engineering teams’ autonomy and efficiency.
  • The significance of investing in developer experience for fostering innovation and efficiency within organizations.
  • Strategies for effectively engaging with open source communities.
Security content track

Come away from this track with a better understanding of:

  • Transforming application security with AI-powered vulnerability fixes.
  • How to delegate the task of prioritizing and fixing security debt to AI.
  • Leveraging open source to enhance code security while mitigating potential vulnerabilities.

Will you celebrate 10 years of GitHub Universe with us?

Whether you’re a leader interested in connecting with and learning from other industry executives, a manager hoping to propel your team’s productivity to new heights, or a developer looking to acquire new skills and further your career, Universe has something for you. Are you in? Get your in-person tickets 35% off while supplies last, or join us virtually for free!
  7. In March, we experienced two incidents that resulted in degraded performance across GitHub services.

March 15 19:42 UTC (lasting 42 minutes)

On March 15, GitHub experienced service degradation from 19:42 to 20:24 UTC due to a regression in the permissions system. This regression caused failures in GitHub Codespaces, GitHub Actions, and GitHub Pages. The problem stemmed from a framework upgrade that introduced MySQL query syntax that is incompatible with the database proxy service used in some production clusters. GitHub responded by rolling back the deployment and fixing a misconfiguration in development and CI environments to prevent similar issues in the future.

March 11 22:45 UTC (lasting 2 hours and 3 minutes)

On March 11, GitHub experienced service degradation from 22:45 to 00:48 UTC due to an inadvertent deployment of network configuration to the wrong environment. This led to intermittent errors in various services, including API requests, GitHub Copilot, GitHub secret scanning, and 2FA using GitHub Mobile. The issue was detected within 4 minutes, and a rollback was initiated immediately. The majority of impact was mitigated by 22:54 UTC. However, the rollback failed in one data center because system-created configuration records were missing a required field, causing 0.4% of requests to continue failing. Full rollback succeeded after manual intervention to correct the configuration data, enabling full service restoration by 00:48 UTC. GitHub has implemented measures for safer configuration changes, such as prevention and automatic cleanup of obsolete configuration and faster issue detection, to prevent similar issues in the future.

Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog. The post GitHub Availability Report: March 2024 appeared first on The GitHub Blog. View the full article
  8. Learn Python through tutorials, blogs, books, project work, and exercises. Access all of it on GitHub for free and join a supportive open-source community. View the full article
  9. Just recently, I was coding a new feature for GitHub Copilot Chat. My task was to enable the chat to recognize a user’s project dependencies, allowing it to provide magical answers when the user poses a question. While I could have easily listed the project dependencies and considered the task complete, I knew that to extract top-notch responses from these large language models, I needed to be careful not to overload the prompt and confuse the model by providing too much context. This meant pre-processing the dependency list and selecting the most relevant ones to include in the chat prompt. Creating machine-processable formats for the most prominent frameworks across various programming languages would have consumed days. It was during this time that I experienced one of those “Copilot moments.” I simply queried the chat in my IDE: Look at the data structure I have selected and create at least 10 examples that conform to the data structure. The data should cover the most prominent frameworks for the Go programming language. Voilà, there it was: my initial batch of machine-processable dependencies. Just 30 minutes later, I had amassed a comprehensive collection of significant dependencies for nearly all supported languages, complete with parameterized unit tests. Completing a task in just 30 minutes that would likely have taken days without GitHub Copilot was truly remarkable. This led me to ponder: what other “Copilot moments” might my colleagues here at GitHub have experienced? Thus, here are a few ways we use GitHub Copilot at GitHub.

1. Semi-automating repetitive tasks

Semi-automating repetitive tasks is a topic that resonates with a colleague of mine from another team. He mentions that they are tasked with developing and maintaining several live services, many of which utilize protocol buffers for data communication.
During maintenance, they often encounter a situation where they need to increment ID numbers in the protobuf definitions, as illustrated in the code snippet below:

```protobuf
google.protobuf.StringValue fetcher = 130 [(opts.cts_opt)={src:"Properties" key:"fetcher"}];
google.protobuf.StringValue proxy_enabled = 131 [(opts.cts_opt)={src:"Properties" key:"proxy_enabled"}];
google.protobuf.StringValue proxy_auth = 132 [(opts.cts_opt)={src:"Properties" key:"proxy_auth"}];
```

He particularly appreciates having GitHub Copilot completions in the editor for these tasks. It serves as a significant time saver, eliminating the need to manually generate ID numbers. Instead, one can simply tab through the completion suggestions until the task is complete.

2. Avoid getting sidetracked

Here’s another intriguing use case I heard about from a colleague. He needed to devise a regular expression to capture a Markdown code block and extract the language identifier. Fully immersed in his work, he preferred not to interrupt his flow by switching to chat, even though it could have provided a solution. Instead, he employed a creative approach by formalizing his task in a code comment:

```javascript
// The string above contains a code block with a language identifier.
// Create a regexp that matches the code block and captures the language identifier.
// Use tagged capture groups for the language and the code.
```

This prompted GitHub Copilot to generate the regular expression as the subsequent statement in his editor:

````javascript
const re = /```(?<lang>\w+)(?<code>[\s\S]+?)```/;
````

With the comment deleted, the task was swiftly accomplished!

3. Structuring data-related notes

During a pleasant coffee chat, one of our support engineers shared an incident she experienced with a colleague last week. It was a Friday afternoon, and they were attempting to troubleshoot an issue for a specific customer. Eventually, they pinpointed the solution by creating various notes in VSCode. At GitHub, we prioritize remote collaboration.
Thus, merely resolving the task wasn’t sufficient; it was also essential to inform our colleagues about the process to ensure the best possible experience for future customer requests. Consequently, even after completing this exhaustive task, they needed to document how they arrived at the solution. She initiated GitHub Copilot Chat and simply typed something along the lines of, “Organize my notes, structure them, and compile the data in the editor into Markdown tables.” Within seconds, the task was completed, allowing them to commence their well-deserved weekend. 4. Exploring and learning Enhancing and acquiring new skills are integral aspects of every engineer’s journey. John Berryman, a colleague of mine, undertook the challenge of leveraging GitHub Copilot to tackle a non-trivial coding task in a completely unfamiliar programming language. His goal was to delve into Rust, so on a Sunday, he embarked on this endeavor with the assistance of GitHub Copilot Chat. The task he set out to accomplish was to develop a program capable of converting any numerical input into its written English equivalent. While initially seeming straightforward, this task presented various complexities such as handling teen numbers, naming conventions for tens, placement of “and” in the output, and more. Twenty-three minutes and nine seconds later, he successfully produced a functional version written in Rust, despite having no prior experience with the language. Notably, he documented his entire process, recording himself throughout the endeavor. https://github.blog/wp-content/uploads/2024/04/rust_from_scratch_720-1.mp4 Berryman uses an older, experimental version of GitHub Copilot to write a program in Rust. Your very own GitHub Copilot moment I found it incredibly enlightening to discover how my fellow Hubbers utilize GitHub Copilot, and their innovative approaches inspired me to incorporate some of their ideas into my daily workflows. 
If you’re eager to explore GitHub Copilot firsthand, getting started is a breeze. Simply install it into your preferred editor and ask away. The post 4 ways GitHub engineers use GitHub Copilot appeared first on The GitHub Blog. View the full article
  10. We have a few new updates to announce for the work we have been doing to improve the Azure Boards + GitHub experience. Let’s jump right into it…

Add link to GitHub commit or pull request (GA)

After several weeks in preview, we are excited to announce our new enhanced experience for linking work items to GitHub. You can now search and select the desired repository and drill down to find and link to the specific pull request or commit. No more need for multiple window changes and copy/paste (although you still have that option). This feature is only available in the New Boards Hub preview.

GitHub connection improvements (private preview)

For GitHub organizations that have thousands of repositories, connecting them to an Azure DevOps project has posed significant challenges. Previously, attempts to connect encountered timeout issues, preventing you from integrating GitHub with Azure Boards. Today we are announcing a preview that will unblock large GitHub organizations. You will now be able to search and select across thousands of repositories without the risk of timeout issues. We are happy to enable this feature upon request. If you are interested, please send us your Azure DevOps organization name (dev.azure.com/{organization}).

AB# links on GitHub pull request (private preview)

As part of our ongoing enhancements to Azure Boards + GitHub integration, we’re introducing a private preview feature that enhances the experience with AB# links. With this update, your AB# links will now appear directly in the Development section of GitHub pull requests. This means you can view the linked work items without the need to navigate through the pull request description or comments, resulting in a more intuitive experience. Please note that these links will only be accessible if you use AB# in the pull request description. They will not appear if you link directly to the pull request from the work item in Azure DevOps.
Removing the AB# link from the description will also remove it from the Development control. If you’re interested in participating in the preview, kindly reach out to us directly via email. Please include your GitHub organization name (https://github.com/{organization}).

Summary

We are excited to continue bringing these new Boards + GitHub integration features (with more on the way) to customers. As always, we love it when folks can get early access and provide feedback. Please follow the links above to enroll and take advantage of these private previews. Click here to learn more about our Boards + GitHub integration roadmap. The post AB links on GitHub pull request and scale improvements for large organizations appeared first on Azure DevOps Blog. View the full article
  11. These GitHub repositories provide valuable resources for mastering computer science, including comprehensive roadmaps, free books and courses, tutorials, and hands-on coding exercises to help you gain the skills and knowledge necessary to thrive in the ever-evolving field of technology.View the full article
  12. Hello fellow readers! Have you ever wondered how the GitHub Security Lab performs security research? In this post, you’ll learn how we leverage GitHub products and features such as code scanning, CodeQL, Codespaces, and private vulnerability reporting. By the time we conclude, you’ll have mastered the art of swiftly configuring a clean, temporary environment for the discovery, verification, and disclosure of vulnerabilities in open source software (OSS). As you explore the contents of this post, you’ll notice we cover a wide array of GitHub tooling. If you have any feedback or questions, we encourage you to engage with our community discussions. Rest assured, this post is designed to be accessible to readers regardless of their prior familiarity with the tools we’ve mentioned. So, let’s embark on this journey together!

Finding an interesting target

The concept of an “interesting” target might have a different meaning for each one of you, depending on the objective of your research. To find an “interesting” target, and also for this to be fun, you have to write down some filters first—unless you really want to dive into anything! From the language the project is written in to the surface it exposes (is it an app? a framework?), every aspect matters when setting a clear objective.

Using GitHub Code Search

Many times, we need to search widely for the use of a specific method or library, either to get inspiration for using it or to pwn it. GitHub code search is there for us. We can use this feature to search across all public GitHub repositories with language, path, and regular expression filters! For instance, see this search query to find uses of readObject in Java files.
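As a concrete sketch of what such a query can look like (this exact pattern is illustrative, not the query from the post; in GitHub code search, regular expressions go between slashes and can be combined with qualifiers such as a language filter):

```text
/\breadObject\s*\(/ language:java
```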
For example, usually one of these aspects is the number of people using the project (that is, the ones affected if a vulnerability occurred), which is provided by GitHub’s dependency network (for example, pytorch/pytorch), but it does not end there: we are also interested in how often the project is updated, the number of stars, recent contributors, etc. Fortunately for us, some very smart people over at the Open Source Security Foundation (OpenSSF) already did some heavy work on this topic.

OpenSSF Criticality Score

The OpenSSF created the Open Source Project Criticality Score, which “defines the influence and importance of a project. It is a number between 0 (least-critical) and 1 (most-critical).” Further details on the specifics of the scoring algorithm can be found in the ossf/criticality_score repository or this post. A few months after the launch, Google collected information for the top 100k GitHub repositories and shared it in this spreadsheet. Within the GitHub Security Lab, we are continuously analyzing OSS projects with the goal of keeping the software ecosystem safe, focusing on high-profile projects we all depend on and rely on. To find those projects, we base our target lists on the OpenSSF criticality score.

The beginning of the process

We published our Code Review of Frigate, in which we exploited a deserialization of user-controlled data using PyYaml’s default Loader. It’s a great project to use as the running example in this blog post, given the Frigate container’s >1.6 million downloads at the time of writing and the ease of the setup process.

The original issue

We won’t be finding new vulnerabilities in this blog post. Instead, we will use the deserialization of user-controlled data issue we reported to illustrate this post. Looking at the spreadsheet above, Frigate is listed at ~16k with a 0.45024 score, which is not yet deemed critical (>0.8), but not bad for almost two years ago!
If you are curious and want to learn a bit more about calculating criticality scores, go ahead and calculate Frigate’s current score with ossf/criticality_score.

Forking the project

Once we have identified our target, let’s fork the repository either via GitHub’s UI or CLI:

```shell
gh repo fork blakeblackshear/frigate --default-branch-only
```

Once forked, let’s go back to the state in which we performed the audit (sha=9185753322cc594b99509e9234c60647e70fae6f). Using GitHub’s API to update a reference:

```shell
gh api -X PATCH /repos/username/frigate/git/refs/heads/dev -F sha=9185753322cc594b99509e9234c60647e70fae6f -F force=true
```

Or using git:

```shell
git clone https://github.com/username/frigate
cd frigate
git checkout 9185753322cc594b99509e9234c60647e70fae6f
git push origin HEAD:dev --force
```

Now we are ready to continue!

Code scanning and CodeQL

Code scanning is GitHub’s solution to find, triage, and prioritize fixes for existing problems in your code.

[Screenshots: code scanning alerts in the Security tab, provided by CodeQL; pull request alerts]

When code scanning is “connected” with a static analysis tool like GitHub’s CodeQL, that’s when the magic happens, but we will get there in a moment. CodeQL is the static code analysis engine developed by GitHub to automate security checks. CodeQL performs semantic and dataflow analysis, “letting you query code as though it were data.” CodeQL’s learning curve can be a little steep at the start, but it is absolutely worth the effort, as its dataflow libraries allow for a solution to any kind of situation.

Learning CodeQL

If you are interested in learning more about the world of static analysis, with exercises and more, go ahead and follow @sylwia-budzynska’s CodeQL zero to hero series. You may also want to join GitHub Security Lab’s Slack instance to hang out with CodeQL engineers and the community.

Creating the CodeQL workflow file

GitHub engineers are doing a fantastic job of making CodeQL analysis available in a one-click fashion.
However, to learn what’s going on behind the scenes (because we are researchers), we are going to do the manual setup.

Running CodeQL at scale

In this case, we are using CodeQL on a per-repository basis. If you are interested in running CodeQL at scale to hunt zero day vulnerabilities and their variants across repositories, feel free to learn more about Multi-repository Variant Analysis. In fact, the Security Lab has done some work to run CodeQL on more than 1k repositories at once!

In order to create the workflow file, follow these steps:

1. Visit your fork.
2. For security and simplicity reasons, remove the existing GitHub Actions workflows so we do not run unwanted workflows. To do so, we are going to use github.dev (GitHub’s web-based editor). For such code changes, which don’t require reviews, rebuilds, or testing, simply browse to /.github/workflows, press the . (dot) key once, and a VS Code editor will pop up in your browser. Then push the changes.
3. Enable GitHub Actions (optional). Head to the GitHub Actions tab and click on “I understand my workflows, go ahead and enable them.” Note that this might not appear if you deleted all workflows previously.
4. Head to the Security tab.
5. Click on “Code Scanning.”
6. Click “Configure scanning tool.”
7. In CodeQL analysis, click “Set up” and then click “Advanced.”

Now, you are guided to GitHub’s UI file editor with a custom workflow file (whose source is located at actions/starter-workflows) for the CodeQL Action. You can notice it is fully customized for this repository by looking at the on.push.branches and strategy.matrix.language values.

Actions documentation

If you are not familiar with GitHub Actions, refer to the documentation to understand the basics of a workflow.

At first glance, we can see that there’s an analyze job that will run for each language defined in the workflow.
The analyze job will:

1. Clone the repository.
2. Initialize CodeQL. In this step, github/codeql-action/init will download the latest release of CodeQL, or any CodeQL packs, that are not available locally.
3. Autobuild. The autobuild step will try to automatically build the code present in the workspace (step 1) in order to populate a database for later analysis. If it’s not a compiled language, it will just succeed and continue.
4. Analyze. The CodeQL binary will be called to finalize the CodeQL database and run queries on it, which may take a few minutes.

Advanced configuration using Security Lab’s Community QL Packs

With CodeQL’s default configuration (default workflow), you will already find impactful issues. Our CodeQL team makes sure that these default queries are designed to have a very low false positive rate so that developers can confidently add them to their CI/CD pipeline. However, if you are a security team like the GitHub Security Lab, you may prefer using a different set of audit models and queries that have a low false negative rate, or community-powered models customized for your specific target or methodology. With that in mind, we recently published our CodeQL Community Packs, and using them is as easy as a one-liner in your workflow file. As the README outlines, we just need to add a packs variable in the Initialize CodeQL step:

```yaml
- name: Initialize CodeQL
  uses: github/codeql-action/init@v2
  with:
    languages: ${{ matrix.language }}
    packs: githubsecuritylab/codeql-${{ matrix.language }}-queries
```

Once done, we are ready to save the file and browse the results! For more information on customizing the scan configuration, refer to the documentation. The bit I find most interesting is Using a custom configuration file.

Browsing alerts

A few minutes in, the results are shown in the Security tab; let’s dig in!
Available filters for the repository alerts

Anatomy of a code scanning alert

While you may think that running CodeQL locally would be easier, code scanning provides additional built-in mechanisms to avoid duplicated alerts, and to prioritize or dismiss them. Also, the amount of information given by a single alert page can save you a lot of time!

Code scanning alert for deserialization of user-controlled data found by CodeQL

In a few seconds, this view answers a few questions: what, where, when, and how. Even though we can see a few lines surrounding the sink, we need to see the whole flow to determine whether we want to pursue the exploitation further. For that, click Show paths.

Code scanning alert for deserialization of user-controlled data found by CodeQL

In this view, we can see that the flow of the vulnerability begins from a user-controllable node (in CodeQL-fu, RemoteFlowSource), which flows without sanitizers to a known PyYaml sink.

Digging into the alert

Looking at the alert page and the flow paths alone isn’t enough information to guess whether this will be exploitable. While new_config is clearly something we could control, we don’t know the specifics of the Loader that yaml.load is using. A custom Loader can inherit from quite a few kinds of Loaders, so we need to make sure that the inherited Loader allows for custom constructors.

```python
def load_config_with_no_duplicates(raw_config) -> dict:
    """Get config ensuring duplicate keys are not allowed."""

    class PreserveDuplicatesLoader(yaml.loader.Loader):
        pass

    ...

    return yaml.load(raw_config, PreserveDuplicatesLoader)
```

However, we know CodeQL uses dataflow for its queries, so it should already have checked the Loader type, right?

The community helps CodeQL get better

When we were writing the post about Frigate’s audit, we came across a new alert for the vulnerability we had just helped fix!
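The loader-inheritance subtlety above can be demonstrated in a few lines. This is a minimal sketch, not Frigate's code: it substitutes a harmless callable (math.sqrt) for the os.popen payload in the report, and only the loader subclass name mirrors the snippet above.

```python
import yaml

# Mirrors Frigate's pattern: a subclass of the full Loader inherits its
# (unsafe) handling of python/object/apply tags.
class PreserveDuplicatesLoader(yaml.loader.Loader):
    pass

# Harmless stand-in for the "!!python/object/apply:os.popen" payload.
payload = "!!python/object/apply:math.sqrt [9]"

# The subclass still resolves the tag to an arbitrary callable and invokes it.
print(yaml.load(payload, PreserveDuplicatesLoader))  # 3.0

# SafeLoader (and its subclasses) reject the same document outright.
try:
    yaml.load(payload, yaml.SafeLoader)
except yaml.constructor.ConstructorError:
    print("blocked by SafeLoader")
```

This is why the dataflow query has to consider which constructor set a Loader subclass ultimately inherits, not just the class name at the call site.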
Our fix suggestion was to change the Loader from yaml.loader.Loader to yaml.loader.SafeLoader, but it turns out that although CodeQL was accounting for a few known safe loaders, it was not accounting for classes inheriting from them. Due to this, code scanning didn’t close the alert we reported. The world of security is huge and evolving every day; supporting every source, sanitizer, and sink that exists for each one of the queries is impossible. Security requires collaboration between developers and security experts, and we encourage everyone who uses CodeQL to give back to the community in any of the following ways:

- Report false positives in github/codeql: CodeQL engineers and members of the community are actively monitoring these. When we came across the false positive explained before, we opened github/codeql#14685.
- Suggest new models for the Security Lab’s CodeQL Community Packs: whether you contribute by crafting a pull request introducing novel models or queries, or by opening an issue to share your model or query ideas, you are already having a huge impact on the research community. Furthermore, the repository is also monitored by CodeQL engineers, so your suggestion might make it into the main repository, impacting a huge number of users and enterprises. Your engagement is more impactful than you might think.

CodeQL model editor

If you are interested in learning about supporting new dependencies with CodeQL, please see the CodeQL model editor. The model editor is designed to help you model external dependencies of your codebase that are not supported by the standard CodeQL libraries.

Now that we are sure about the exploitability of the issue, we can move on to the exploitation phase.

GitHub Codespaces

Codespaces is GitHub’s solution for instant, customizable, cloud-based development environments built on Visual Studio Code.
In this post, we will be using Codespaces as our exploitation environment due to its safe (isolated) and ephemeral nature, as we are one click away from creating and deleting a codespace. Although this feature has its own billing, we will be using the free 120 core hours per month.

Creating a codespace

I wasn’t kidding when I said “we are one click away from creating and deleting a codespace”—simply go to “Code” and click “Create codespace on dev.” Fortunately for us, Frigate maintainers have helpfully developed a custom devcontainer configuration for seamless integration with VSCode (and so, Codespaces).

Customizing devcontainer configuration

For more information about .devcontainer customization, refer to the documentation.

Once loaded, I suggest you close the current browser tab and instead connect to the codespace using VSCode along with the Remote Explorer extension. With that set up, we have a fully integrated environment with built-in port forwarding.

Set up for debugging and exploitation

When performing security research, having a full setup ready for debugging can be a game changer. In most cases, exploiting the vulnerability requires analyzing how the application processes and reacts to your interactions, which can be impossible without debugging.

Debugging

Right after creating the codespace, we can see that it failed:

[Screenshot: build error]

Given that there is an extensive devcontainer configuration, we can guess that it was not made for Codespaces, but for a local VSCode installation not meant to be used in the cloud.
Clicking “View Creation Log” helps us find out that Docker is trying to find a non-existing device:

```
ERROR: for frigate-devcontainer - Cannot start service devcontainer: error gathering device information while adding custom device "/dev/bus/usb": no such file or directory
```

We need to head to the docker-compose.yml file (/workspaces/frigate/docker-compose.yml) and comment the following out:

- The devices property
- The deploy property
- The /dev/bus/usb volume

Afterwards, we go to /workspaces/frigate/.devcontainer/post_create.sh and remove lines 5-9. After the change, we can successfully rebuild the container:

[Screenshot: rebuilding the container]

Once rebuilt, we can see 6 ports in the port forwarding section. However, the Frigate API, the one we are targeting through nginx, is not active. To solve that, we can start debugging by heading to the “Run and Debug” (left) panel and clicking the green (play-like) button to start debugging Frigate.

Exploitation

The built-in port forwarding feature allows us to use network-related software like Burp Suite or Caido right from our native host, so we can send the following request:

```
POST /api/config/save HTTP/1.1
Host: 127.0.0.1:53128
Content-Length: 50

!!python/object/apply:os.popen
- touch /tmp/pwned
```

Using the debugging setup, we can analyze how new_config flows to yaml.load and creates the /tmp/pwned file. Now that we have a valid exploit to prove the vulnerability, we are ready to report it to the project.

Private vulnerability reporting

Reporting vulnerabilities in open source projects has never been easy, for many reasons: finding a private way of communicating with maintainers, getting their reply, and agreeing on the many topics a vulnerability covers is quite challenging on a text-based channel. That is what private vulnerability reporting (PVR) solves: a single, private, interactive place in which security researchers and maintainers work together to make their software more secure, and their dependent consumers more aware.
Closing the loop Published advisories resulting from private vulnerability reports can be included in the GitHub Advisory Database to automatically disclose your report to end users using Dependabot! Note that GitHub has chosen to introduce this feature in an opt-in manner, aligning with our developer-first philosophy. This approach grants project maintainers the autonomy to decide whether they wish to participate in this reporting experience. That said, tell your favorite maintainers to enable PVR! You can find inspiration in the issues we open when we can’t find a secure and private way of reporting a vulnerability. Sending the report Once we validated the vulnerability and built a proof of concept (PoC), we can use private vulnerability reporting to privately communicate with Frigate maintainers. This feature allows for special values like affected products, custom CVSS severity, linking a CWE and assigning credits with defined roles, ensuring precise documentation and proper recognition, crucial for a collaborative and effective security community. Once reported, it allows for both ends (reporter and maintainer) to collaborate on a chat, and code together in a temporary private fork. On the maintainer side, they are one click away from requesting a CVE, which generally takes just two days to get created. For more information on PVR, refer to the documentation. Example of a published report GitHub and security research In today’s tech-driven environment, GitHub serves as a valuable resource for security researchers. With tools such as code scanning, Codespaces, and private vulnerability reporting seamlessly integrated into the platform, researchers can effectively identify and address vulnerabilities end to end. This comprehensive strategy not only makes research easier but also enhances the global cybersecurity community. 
By offering a secure, collaborative, and efficient platform to spot and tackle potential threats, GitHub empowers both seasoned security professionals and aspiring researchers. It’s the go-to destination for boosting security and keeping up with the constantly changing threat landscape. Happy coding and research! GitHub Security Lab’s mission is to inspire and enable the community to secure the open source software we all depend on. Learn more about their work.
  13. Begin your MLOps journey with these comprehensive free resources available on GitHub.View the full article
  14. AI has become an integral part of my workflow these days, and with the assistance of GitHub Copilot, I move a lot faster when I’m building a project. Having used AI tools to increase my productivity over the past year, I’ve realized that, similar to learning how to use a new framework or library, we can enhance our efficiency with AI tools by learning how to best use them. In this blog post, I’ll share some of the daily things I do to get the most out of GitHub Copilot. I hope these tips will help you become a more efficient and productive user of the AI assistant.

Beyond code completion

To make full use of the power of GitHub Copilot, it’s important to understand its capabilities. GitHub Copilot is developing rapidly, and new features are being added all the time. It’s no longer just a code completion tool in your editor—it now includes a chat interface that you can use in your IDE, a command line tool via a GitHub CLI extension, a summary tool in your pull requests, a helper tool in your terminals, and much, much more. In a recent blog post, I’ve listed some of the ways you didn’t know you could use GitHub Copilot. This will give you a great overview of how much the AI assistant can currently do. But beyond interacting with GitHub Copilot, how do you help it give you better answers? Well, the answer to that needs a bit more context.

Context, context, context

If you understand large language models (LLMs), you will know that they are designed to make predictions based on the context provided. This means the more contextually rich our input or prompt is, the better the prediction or output will be. As such, learning to provide as much context as possible is key when interacting with GitHub Copilot, especially with the code completion feature. Unlike ChatGPT, where you need to provide all the data to the model in the prompt window, installing GitHub Copilot in your editor lets the assistant infer context from the code you’re working on.
It then uses that context to provide code suggestions. We already know this, but what else can we do to give it additional context? I want to share a few essential tips with you to provide GitHub Copilot with more context in your editor to get the most relevant and useful code out of it:

1. Open your relevant files

Having your files open provides GitHub Copilot with context. When you have additional files open, it will help to inform the suggestion that is returned. Remember, if a file is closed, GitHub Copilot cannot see the file’s content in your editor, which means it cannot get the context from those closed files. GitHub Copilot looks at the current open files in your editor to analyze the context, create a prompt that gets sent to the server, and return an appropriate suggestion. Have a few files open in your editor to give GitHub Copilot a bigger picture of your project. You can also use #editor in the chat interface to provide GitHub Copilot with additional context on your currently opened files in Visual Studio Code (VS Code) and Visual Studio. https://github.blog/wp-content/uploads/2024/03/01_editor_command_open_files.mp4 Remember to close unneeded files when context switching or moving on to the next task.

2. Provide a top-level comment

Just as you would give a brief, high-level introduction to a coworker, a top-level comment in the file you’re working in can help GitHub Copilot understand the overall context of the pieces you will be creating—especially if you want your AI assistant to generate the boilerplate code for you to get going. Be sure to include details about what you need and provide a good description so it has as much information as possible. This will help to guide GitHub Copilot to give better suggestions, and give it a goal to work toward. Having examples, especially when processing data or manipulating strings, helps quite a bit.

3.
Set includes and references

It’s best to manually set the includes/imports or module references you need for your work, particularly if you’re working with a specific version of a package. GitHub Copilot will make suggestions, but you know what dependencies you want to use. This can also help to let GitHub Copilot know what frameworks, libraries, and their versions you’d like it to use when crafting suggestions. It can be helpful for jump-starting GitHub Copilot to a newer library version when it defaults to providing older code suggestions. https://github.blog/wp-content/uploads/2024/03/03_includes_references.mp4

4. Meaningful names matter

The names of your variables and functions matter. If you have a function named foo or bar, GitHub Copilot will not be able to give you the best completion because it isn’t able to infer intent from the names. Just as the function name fetchData() won’t mean much to a coworker (or you after a few months), fetchData() won’t mean much to GitHub Copilot either. Implementing good coding practices will help you get the most value from GitHub Copilot. While GitHub Copilot helps you code and iterate faster, remember the old rule of programming still applies: garbage in, garbage out.

5. Provide specific and well-scoped function comments

Commenting your code helps you get very specific, targeted suggestions. A function name can only be so descriptive without being overly long, so function comments can help fill in details that GitHub Copilot might need to know. One of the neat features of GitHub Copilot is that it can determine the correct comment syntax typically used in your programming language for function/method comments and will help create them for you based on what the code does. Adding more detail to these comments as your first change helps GitHub Copilot determine what you would like to do in code and how to interact with that function.
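The naming and comment advice above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the post; the function and field names are invented for the example.

```python
from datetime import date

# Opaque names give GitHub Copilot (and your colleagues) nothing to infer from:
def foo(a, b):
    ...

# A descriptive name plus a short, specific docstring supply real context.
def filter_orders_since(orders: list[dict], cutoff: date) -> list[dict]:
    """Return orders placed on or after `cutoff`, newest first."""
    recent = [o for o in orders if o["placed"] >= cutoff]
    return sorted(recent, key=lambda o: o["placed"], reverse=True)
```

Given the second version, the assistant can infer intent (filtering by date, sort order) directly from the name and docstring, which is exactly the context foo(a, b) withholds.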
Remember: single, specific, short comments help GitHub Copilot provide better context.

https://github.blog/wp-content/uploads/2024/03/05_simple_specific_short.mp4

6. Provide sample code

Providing sample code to GitHub Copilot will help it determine what you're looking for. This helps to ground the model and provide it with even more context. It also helps GitHub Copilot generate suggestions that match the language and tasks you want to achieve, and return suggestions based on your current coding standards and practices. Unit tests provide one level of sample code at the individual function/method level, but you can also provide code examples in your project showing how to do things end to end. The cool thing about using GitHub Copilot long term is that it nudges us toward a lot of the good coding practices we should have been doing all along.

Learn more about providing context to GitHub Copilot by watching this YouTube video.

Inline chat with GitHub Copilot

7. Inline chat

Beyond providing enough context, there are built-in features of GitHub Copilot that you may not be taking advantage of. Inline chat, for example, gives you an opportunity to chat with GitHub Copilot right between your lines of code. By pressing CMD + I (CTRL + I on Windows), you'll have Copilot right there to ask questions. This is more convenient for quick fixes than opening up GitHub Copilot Chat's side panel.

https://github.blog/wp-content/uploads/2024/03/07_a_inline_chat_animated.mp4

This experience provides you with code diffs inline, which is awesome. There are also special slash commands available, like creating documentation with just the slash of a button!

Tips and tricks with GitHub Copilot Chat

GitHub Copilot Chat provides an experience in your editor where you can have a conversation with the AI assistant. You can improve this experience by using built-in features to make the most out of it.

8.
Remove irrelevant requests

For example, did you know that you can delete a previously asked question in the chat interface to remove it from the indexed conversation, especially if it is no longer relevant? Doing this will improve the flow of conversation and give GitHub Copilot only the information it needs to provide you with the best output.

9. Navigate through your conversation

Another tip I found is to use the up and down arrow keys to navigate through your conversation with GitHub Copilot Chat. I found myself scrolling through the chat interface to find the last question I asked, then discovered I can just use my keyboard arrows, just like in the terminal!

https://github.blog/wp-content/uploads/2024/03/09_up_down_arrows_animated.mp4

10. Use the @workspace agent

If you're using VS Code or Visual Studio, remember that agents are available to help you go even further. The @workspace agent, for example, is aware of your entire workspace and can answer questions related to it. As such, it can provide even more context when you're trying to get good output from GitHub Copilot.

https://github.blog/wp-content/uploads/2024/03/10_workspace_agent.mp4

11. Highlight relevant code

Another great tip when using GitHub Copilot Chat is to highlight relevant code in your files before asking it questions. This helps to give targeted suggestions and provides the assistant with more context into what you need help with.

12. Organize your conversations with threads

You can have multiple ongoing conversations with GitHub Copilot Chat on different topics by isolating them with threads. We've provided a convenient way for you to start a new conversation (thread) by clicking the + sign on the chat interface.

13. Slash commands for common tasks

Slash commands are awesome, and there are quite a few of them. We have commands to help you explain code, fix code, create a new notebook, write tests, and many more.
They are just shortcuts to common prompts that we've found to be particularly helpful in day-to-day development from our own internal usage.

/explain: Get code explanations. Usage: open the file or highlight the code you want explained and type "/explain what is the fetchPrediction method?"
/fix: Receive a proposed fix for the problems in the selected code. Usage: highlight problematic code and type "/fix propose a fix for the problems in fetchAirports route"
/tests: Generate unit tests for selected code. Usage: open the file or highlight the code you want tests for and type "/tests"
/help: Get help on using Copilot Chat. Usage: type "/help what can you do?"
/clear: Clear the current conversation. Usage: type "/clear"
/doc: Add a documentation comment. Usage: highlight code and type "/doc"; you can also press CMD+I in your editor and type "/doc" inline
/generate: Generate code to answer your question. Usage: type "/generate code that validates a phone number"
/optimize: Analyze and improve the running time of the selected code. Usage: highlight code and type "/optimize fetchPrediction method"
/new: Scaffold code for a new workspace. Usage: type "/new create a new django app"
/simplify: Simplify the selected code. Usage: highlight code and type "/simplify"
/feedback: Provide feedback to the team. Usage: type "/feedback"

See the following image for commands available in VS Code.

14. Attach relevant files for reference

In Visual Studio and VS Code, you can attach relevant files for GitHub Copilot Chat to reference by using #file. This scopes GitHub Copilot to a particular context in your code base and provides you with a much better outcome. To reference a file, type # in the comment box, choose #file, and you will see a popup where you can choose your file. You can also type #file_name.py in the comment box. See below for an example:

https://github.blog/wp-content/uploads/2024/03/14_attach_filename.mp4

15.
Start with GitHub Copilot Chat for faster debugging

These days, whenever I need to debug some code, I turn to GitHub Copilot Chat first. Most recently, I was implementing a decision tree and performing k-fold cross-validation. I kept getting incorrect accuracy scores and couldn't figure out why. I turned to GitHub Copilot Chat for some assistance, and it turns out I wasn't using my training data set (X_train, y_train), even though I thought I was:

"I'm catching up on my AI/ML studies today. I had to implement a DecisionTree and use the cross_val_score method to evaluate the model's accuracy score. I couldn't figure out why the incorrect values for the accuracy scores were being returned, so I turned to Chat for some help" pic.twitter.com/xn2ctMjAnr — Kedasha is learning about AI + ML (@itsthatladydev) March 23, 2024

I figured this out a lot faster than I would have with external resources. I want to encourage you to start with GitHub Copilot Chat in your editor to get debugging help faster instead of going to external resources first. Follow my example above by explaining the problem, pasting the problematic code, and asking for help. You can also highlight the problematic code in your editor and use the /fix command in the chat interface.

Be on the lookout for sparkles!

In VS Code, you can quickly get help from GitHub Copilot by looking out for "magic sparkles." For example, in the commit comment section, clicking the magic sparkles will help you generate a commit message with the help of AI. You can also find magic sparkles inline in your editor as you're working, for a quick way to access GitHub Copilot inline chat.

https://github.blog/wp-content/uploads/2024/03/15_magic_sparkles.mp4

Pressing them will use AI to help you fill out the data, and more magic sparkles are being added wherever we find other places for GitHub Copilot to help in your day-to-day coding experience.
Know where your AI assistant shines

To get the most out of the tool, remember that context and prompt crafting are essential. Understanding where the tool shines best is also important. Some of the things GitHub Copilot is very good at include boilerplate code and scaffolding, writing unit tests, writing documentation, pattern matching, explaining uncommon or confusing syntax, cron jobs, and regex, as well as helping you remember things you've forgotten and debugging.

But never forget that you are in control, and GitHub Copilot is here as just that: your copilot. It is a tool that can help you write code faster, and it's up to you to decide how best to use it. It is not here to do your work for you or to write everything for you. It will guide you and nudge you in the right direction, just as a coworker would if you asked them questions or for guidance on a particular issue.

I hope these tips and best practices were helpful. You can significantly improve your coding efficiency and output by properly leveraging GitHub Copilot. Learn more about how GitHub Copilot works by reading Inside GitHub: Working with the LLMs behind GitHub Copilot and Customizing and fine-tuning LLMs: What you need to know.

Harness the power of GitHub Copilot. Learn more or get started now.
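The naming and commenting advice above (tips 2, 4, and 5) can be sketched in a short Python example. The file, function, and comments here are illustrative, not from the article: the point is that a top-level comment, a descriptive name, and a well-scoped docstring give an AI assistant (and human readers) enough intent to work with.

```python
# Utilities for validating and normalizing US phone numbers.
# A top-level comment like this gives GitHub Copilot the file's overall intent.

import re

def normalize_us_phone_number(raw: str) -> str:
    """Strip punctuation from a US phone number and return its 10-digit form.

    Accepts inputs like "(404) 555-0123" or "1-404-555-0123". An optional
    leading country code "1" is discarded. Raises ValueError if the input
    does not contain exactly 10 digits after cleanup.
    """
    digits = re.sub(r"\D", "", raw)           # keep digits only
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                   # drop the US country code
    if len(digits) != 10:
        raise ValueError(f"expected 10 digits, got {len(digits)}: {raw!r}")
    return digits

print(normalize_us_phone_number("(404) 555-0123"))
```

Compare this with a function named foo() and no docstring: the logic could be identical, but neither a coworker nor an assistant could infer what the inputs and failure modes are supposed to be.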
  15. GitHub, the number one hub for developers worldwide, has become an integral platform in modern software development. What started out as a tool for easier collaboration among a few coders has now grown into an ecosystem powering millions of projects, from open source to enterprise. However, despite its widespread adoption, understanding the inner workings of GitHub can be a challenge for newcomers, and one question often arises: how does the platform actually work under the hood?

In this article about how GitHub works, we'll navigate through GitHub's core components, shedding light on its primary functions. From version control to issue tracking and collaboration tools, we'll explore how GitHub empowers software teams to streamline their workflows and enhance productivity.

Key Takeaways

- GitHub plays a crucial role in modern software development, serving as a central hub for collaboration and code management across various project scales.
- Core tools like repositories, commits, branches, and pull requests are fundamental for effective version control and team collaboration on GitHub.
- GitHub's workflow fosters streamlined processes, transparency, and continuous improvement in software development.

What is GitHub?

GitHub is a web-based platform that provides developers with a centralized hub for storing, sharing, and collaborating on code repositories. Founded in 2008, GitHub offers powerful version control features powered by Git, allowing developers to track changes to their codebase over time. It facilitates collaboration through features such as pull requests, code reviews, and issue tracking, enabling teams to work together efficiently.

Now that we've defined GitHub, let's examine the fundamental tools and functionalities that enable it to enhance developer workflows. To learn more about Git, check out the blog How Git Works.
GitHub's Main Features and Tools

In this section, we delve into the backbone of GitHub: its essential management features and collaborative tools.

Core Version Control Features

Below are the powerful version control tools that form the cornerstone of GitHub's functionality:

- Repositories: These serve as centralized storage for your codebase, documentation, and more. They are not only backed up and easily shareable but also come with comprehensive access controls to manage who can view or edit the project.
- Commits: Think of commits as milestones in your project's timeline. Each commit is a snapshot of your project at a specific point, capturing changes and allowing you to trace the evolution of your code over time.
- Branches: Branches exist in parallel with the main project, providing a safe space for developers to work on new features or fix bugs without directly modifying the main code. Changes made in branches do not affect the main project code until they are merged.
- Pull Requests: The bridge for collaboration, pull requests enable team members to propose, discuss, and review changes from branches before they are merged into the main codebase. This fosters a culture of peer review and collective improvement.

To learn more about Git Pull, check out this article: How to Force Git Pull to Overwrite Local Files?

GitHub's Tools for Team Collaboration

Beyond code management functionality, GitHub also has the following integrated tools for improved team workflows:

- Issues: GitHub issues act as a versatile platform for tracking bugs, tasks, and enhancements. They link directly to the code, making it easier to tie discussions to specific project elements.
- Wikis: Every GitHub repository can have its own wiki, allowing teams to create and maintain comprehensive documentation in a centralized location.
This ensures that information remains accessible and up to date.
- Graphs and Pulse: These visualization tools offer insights into a project's activity, such as contribution patterns and progress over time. They provide a high-level overview, aiding in project management and team coordination.
- GitHub Pages: This feature lets users publish websites directly from their repositories. Ideal for project documentation, personal portfolios, or project showcases, GitHub Pages simplifies the process of taking your project to the web.

The GitHub Workflow

The GitHub workflow revolves around modifying a codebase housed in a repository, recording those modifications, and then proposing and integrating them into the main project. The typical process follows this general workflow:

Set Up a Repository

To begin, developers create a new repository from scratch or fork an existing repository they have access to. Forking creates a copy of the project under the developer's GitHub account, while cloning downloads the repository to the developer's local machine for editing.

Make Changes and Commit

With the repository files available locally, developers can make necessary edits to the code, add or remove files as needed, and more. As changes are made, developers stage them for inclusion in the next commit. Each commit functions as a snapshot recording changes, accompanied by a commit message explaining the what and why of the alterations. Commits capture an evolutionary trail of incremental additions.

Push Changes to GitHub

Thus far, changes exist only on the local clone. To synchronize with the remote repository on GitHub, developers use the git push command to upload recent commits. This ensures an up-to-date development version is centrally available for team members to access.

Open Pull Requests

Here, developers propose that commits be merged back into the canonical project via pull requests.
By opening pull requests, they request that administrators or teammates review the changes, provide feedback, approve, and finally integrate the commits. This vital checkpoint ensures the quality of the code.

Repeat for New Changes

The cycle repeats as developers create additional features, bug fixes, and more. Continued iteration with decentralized contributions, merged as they land, gradually shapes the software architecture.

Collaborating on GitHub

GitHub provides a suite of features for organizing collaboration around projects in productive and transparent ways:

Issues and Project Boards

Issues provide threaded discussions centered on ideas, enhancements, bugs, or broader task management. Project boards visually track issues as cards sorted into progress columns. This gives a high-level view of the work remaining. Issues facilitate coordination from problem-solving to planning.

Pull Requests and Code Reviews

As outlined in the workflow, pull requests allow proposed commits to be reviewed before integration. Teammates can provide feedback on code quality, suggest improvements, approve changes, and monitor progress through this crucial process before merging to ensure consistency.

Organizations and Teams

For larger-scale collaboration, organizations contain multiple repositories under one entity. Owners can manage permissions for members and divide them into teams with custom access levels to individual repositories or coding resources.

Wikis and GitHub Pages

To organize institutional knowledge, wikis document processes, guidelines, meeting notes, and more in one central location tied to the appropriate repositories. GitHub Pages enables effortless publishing of website resources related to projects.

Notifications and Social Features

Notifications alert contributors to relevant activity like issue assignments, PR updates, and comments requiring a response. A news feed provides updates across the repositories someone follows. Social aspects streamline awareness.
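The set-up/commit/branch part of the workflow described above can be sketched locally. This is a hedged illustration, not the article's code: Python drives git through subprocess in a throwaway directory, the repository and file names are made up, and a real workflow would finish with git push and a pull request on GitHub.

```python
# Sketch of the local half of the GitHub workflow: set up a repository,
# stage and commit a change, and branch for new work. Requires git on PATH.
import subprocess, tempfile
from pathlib import Path

def git(*args, cwd):
    """Run a git command with a throwaway identity, raising on failure."""
    return subprocess.run(
        ("git", "-c", "user.name=demo", "-c", "user.email=demo@example.com") + args,
        cwd=cwd, check=True, capture_output=True, text=True,
    ).stdout

repo = Path(tempfile.mkdtemp(prefix="gh-demo-")) / "project"
repo.mkdir()
git("init", cwd=repo)                                  # set up a repository
(repo / "app.py").write_text("print('hello')\n")       # make a change
git("add", "app.py", cwd=repo)                         # stage it for the next commit
git("commit", "-m", "Add app entry point", cwd=repo)   # snapshot with a message
git("switch", "-c", "feature/greeting", cwd=repo)      # branch for new work
# A real workflow would now `git push` and open a pull request on GitHub.
print(git("log", "--oneline", cwd=repo), end="")
```

Each commit in the log is one of the "snapshots" the article describes; the branch isolates further edits until a pull request merges them back.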
In these ways, GitHub provides robust tools to connect distributed teams working in tandem.

Additional GitHub Capabilities

Beyond its core features, GitHub continues to expand its ecosystem with the following specialized capabilities:

- GitHub Actions: GitHub Actions provides infrastructure for automating custom software workflows directly integrated with the repository. For example, developers can set up trigger events to run preset tasks like testing, building, and deploying code without manual intervention. Actions streamline DevOps pipelines.
- GitHub Packages: GitHub Packages lets users store and distribute other software assets like Ruby gems or Docker containers. Teams can share these packages privately or publicly, just like code repositories.
- GitHub Sponsors: The GitHub Sponsors program enables funding support for open-source project developers. Organizations or individual users can sponsor contributors financially to empower sustainable open-source maintenance from the community.
- Code Scanning: GitHub has integrated code scanning, which automatically scans code for security vulnerabilities and coding errors. This flags issues like credential leaks early on to prevent repository compromises. Code scanning integrates with GitHub Actions so scans can be triggered automatically.

Conclusion

In essence, GitHub has democratized software development, empowering developers to collaborate seamlessly, share knowledge, and collectively build better software. Its impact on the industry cannot be overstated, and it will likely continue to shape the future of software development practices for years to come.

Enroll in our Git for Beginners course to learn and practice more Git concepts. View the full article
  16. This is abridged content from October 2023's Insider newsletter. Like what you see? Sign up for the newsletter to receive complete, unabridged content in your inbox twice a month. Sign up now >

Are you ready to unlock the secrets of organization, collaboration, and project magic? Buckle up, because we've got a handful of GitHub Projects tips and tricks that will turn you into a project management wizard! Keep reading for a list of things you can do with GitHub Projects:

1. Manage your projects from the command line

Some folks prefer to work in the terminal, and with the GitHub CLI project command, you can manage and automate workflows from the command line. For example, you can create a new project with a command like gh project create. Then, you can create issues with the gh issue create command and add them to the project, making it easy to manage and track your project's progress from the command line.

2. Create reusable project templates

If you often find yourself recreating projects with similar content and structure, you can set a project as a template when creating new projects. To set your project as a template, navigate to the project "Settings" page and, under the "Templates" section, toggle on Make template. This will turn the project into a template that can be used with the green Use this template button at the top of your project or when creating a new project.

3. Add issues from any organization

If you're an open source maintainer or a developer with multiple clients, you may be working across various organizations at a time. This also means you have multiple issues to keep track of, and GitHub Projects can help you collate issues from any organization onto a single project. You can do this in one of two ways:

- Copy the issue link from the organization and paste it into the project.
- Search for the organization and repository from the project using # and select the issues you want to add.

4.
Edit multiple items at once

Rather than spending time manually updating individual items, you can edit multiple items at once with the bulk editing feature. Let's say you want to assign multiple issues to yourself: on the table layout, assign one issue, highlight and copy the contents of the cell, then select the remaining items you want assigned and paste the copied contents. And there you have it: you just assigned yourself to multiple issues at once. Check out this GIF for a visual representation:

Want even more tips and tricks? Check out this blog post for 10 more GitHub Projects tips, or learn how we use GitHub Projects to standardize our workflows and stay aligned. You're now equipped to work your magic with GitHub Projects!

Want to receive content like this twice a month, right in your inbox? Sign up for the newsletter now >
  17. Starting today, code scanning autofix is available in public beta for all GitHub Advanced Security customers. Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python, and delivers code suggestions shown to remediate more than two-thirds of found vulnerabilities with little or no editing.

Found means fixed

Our vision for application security is an environment where found means fixed. By prioritizing the developer experience in GitHub Advanced Security, we already help teams remediate 7x faster than traditional security tools. Code scanning autofix is the next leap forward, helping developers dramatically reduce the time and effort spent on remediation.

Even though applications remain a leading attack vector, most organizations admit to an ever-growing number of unremediated vulnerabilities in production repositories. Code scanning autofix helps organizations slow the growth of this "application security debt" by making it easier for developers to fix vulnerabilities as they code. Just as GitHub Copilot relieves developers of tedious and repetitive tasks, code scanning autofix will help development teams reclaim time formerly spent on remediation. Security teams will also benefit from a reduced volume of everyday vulnerabilities, so they can focus on strategies to protect the business while keeping up with an accelerated pace of development.

Want to try code scanning autofix? If your organization is new to GitHub or does not yet have GitHub Advanced Security (or its prerequisite, GitHub Enterprise), contact us to request a demo and set up a free trial.

How it works

When a vulnerability is discovered in a supported language, fix suggestions include a natural language explanation of the suggested fix, together with a preview of the code suggestion that the developer can accept, edit, or dismiss.
In addition to changes to the current file, these code suggestions can include changes to multiple files and the dependencies that should be added to the project. Want to learn more about how we do it? Read Fixing security vulnerabilities with AI: A peek under the hood of code scanning autofix.

Behind the scenes, code scanning autofix leverages the CodeQL engine and a combination of heuristics and GitHub Copilot APIs to generate code suggestions. To learn more about autofix and its data sources, capabilities, and limitations, please see About autofix for CodeQL code scanning.

What's next?

We'll continue to add support for more languages, with C# and Go coming next. We also encourage you to join the autofix feedback and resources discussion to share your experiences and help guide further improvements to the autofix experience. Together, we can help move application security closer to a place where a vulnerability found means a vulnerability fixed.

Resources

To help you learn more, GitHub has published extensive resources and documentation about the system architecture, data flow, and AI policies governing code scanning autofix.

- Changelog: Code scanning now suggests AI-powered autofixes for CodeQL alerts in pull requests (beta)
- Engineering blog: Fixing security vulnerabilities with AI
- Documentation: About autofix for CodeQL code scanning
- Discussion: Autofix feedback and resources

If you want to give code scanning autofix a try, but your organization is new to GitHub or does not yet have GitHub Advanced Security (or its prerequisite, GitHub Enterprise), contact us to request a demo and set up a free trial.
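To make the "accept, edit, or dismiss" suggestion concrete, here is a hypothetical Python example of the kind of before/after change an autofix suggestion might propose for a SQL-injection alert. The function names and schema are invented for illustration; autofix itself operates on CodeQL alerts in your own code.

```python
# Illustration of a typical SQL-injection remediation: replace string
# interpolation with a parameterized query. Uses an in-memory SQLite DB.
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so input like  x' OR '1'='1  changes the query's meaning.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_fixed(conn, username):
    # Suggested fix: a parameterized query, so input is never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user_fixed(conn, "alice"))
```

In a real autofix suggestion, the unsafe version would be flagged by CodeQL and the parameterized version would appear as the proposed code change, alongside a natural-language explanation.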
  18. Learn how to automate machine learning training and evaluation using scikit-learn pipelines, GitHub Actions, and CML.View the full article
  19. Millions of secrets and authentication keys were leaked on GitHub in 2023, with the majority of developers not bothering to revoke them even after being notified of the mishap, new research has claimed. A report from GitGuardian, a project that helps developers secure their software development with automated secrets detection and remediation, claims that in 2023, GitHub users accidentally exposed 12.8 million secrets in more than 3 million public repositories. These secrets include account passwords, API keys, TLS/SSL certificates, encryption keys, cloud service credentials, OAuth tokens, and similar.

Slow response

During the development stage, many IT pros hardcode different authentication secrets to make their lives easier. However, they often forget to remove the secrets before publishing the code on GitHub. Thus, should any malicious actors discover these secrets, they would get easy access to private resources and services, which can result in data breaches and similar incidents.

India was the country from which most leaks originated, followed by the United States, Brazil, China, France, and Canada. The vast majority of the leaks came from the IT industry (65.9%), followed by education (20.1%). The remaining 14% was split between science, retail, manufacturing, finance, public administration, healthcare, entertainment, and transport.

Making a mistake and hardcoding secrets can happen to anyone, but what happens after is perhaps even more worrying. Just 2.6% of the secrets are revoked within the hour; practically everything else (91.6%) remains valid even after five days, when GitGuardian stops tracking their status. To make matters worse, the project sent 1.8 million emails to different developers and companies warning them of its findings, and just 1.8% responded by removing the secrets from the code. Riot Games, GitHub, OpenAI, and AWS were listed as companies with the best response mechanisms.
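A toy version of the pattern-based secret detection described in the report can be sketched in a few lines of Python. The regexes below are illustrative only; real scanners such as GitGuardian or GitHub secret scanning use hundreds of provider-specific patterns plus validity checks, and the sample key is AWS's published documentation example, not a live credential.

```python
# Minimal sketch of pattern-based secret detection: scan text for a few
# common credential shapes before code is pushed to a public repository.
import re

SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key assignment": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched text) pairs for every hit in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# AKIAIOSFODNN7EXAMPLE is the fake key from AWS documentation.
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops, hardcoded before pushing'
for name, value in find_secrets(sample):
    print(f"{name}: {value}")
```

Running a check like this in a pre-commit hook is one way to catch the "forgot to remove the secret" mistake the article describes before the code ever reaches a public repository.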
Via BleepingComputer

More from TechRadar Pro

- GitHub's secret scanning feature is now even more powerful, covering AWS, Google, Microsoft, and more
- Here's a list of the best firewalls around today
- These are the best endpoint security tools right now

View the full article
  20. In February, we experienced two incidents that resulted in degraded performance across GitHub services.

February 26 18:34 UTC (lasting 53 minutes)
February 29 09:32 UTC (lasting 142 minutes)

On February 26 and February 29, we had two incidents related to a background job service that caused processing delays to GitHub services. The incident on February 26 lasted for 53 minutes, while the incident on February 29 lasted for 142 minutes.

The incident on February 26 was related to capacity constraints in our job queuing service and a failure of our automated failover system. Users experienced delays in Webhooks, GitHub Actions, and UI updates (for example, a delay in UI updates on pull requests). We mitigated the incident by manually failing over to our secondary cluster. No data was lost in the process.

The incident on February 29 also caused processing delays to the Webhooks, GitHub Actions, and GitHub Issues services, with 95% of the delays occurring in a 22-minute window between 11:05 and 11:27 UTC. At 9:32 UTC, our automated failover successfully routed traffic, but an improper restoration to the primary at 10:32 UTC caused a significant increase in queued jobs until a correction was made at 11:21 UTC, after which healthy services burned down the backlog until full restoration at 11:27 UTC.

To prevent recurrence of these incidents in the short term, we have completed three significant improvements: better automation, increased reliability of our failover process, and expanded capacity of our background job queuing services. For the longer term, a more significant effort is already in progress to improve the overall scalability and reliability of our job processing platform.

Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we're working on, check out the GitHub Engineering Blog.
The post GitHub Availability Report: February 2024 appeared first on The GitHub Blog. View the full article
  21. At GitHub, we use merge queue to merge hundreds of pull requests every day. Developing this feature and rolling it out internally did not happen overnight, but the journey was worth it, both because of how it has transformed the way we deploy changes to production at scale and because of how it has helped improve the velocity of customers, too. Let's take a look at how this feature was developed and how you can use it.

Merge queue is generally available and is also now available on GitHub Enterprise Server! Find out more.

Why we needed merge queue

In 2020, engineers from across GitHub came together with a goal: improve the process for deploying and merging pull requests across the GitHub service, and specifically within our largest monorepo. This process was becoming overly complex to manage, required special GitHub-only logic in the codebase, and required developers to learn external tools, which meant the engineers developing for GitHub weren't actually using GitHub in the same way as our customers.

To understand how we got to this point in 2020, it's important to look even further back. By 2016, nearly 1,000 pull requests were merging into our large monorepo every month. GitHub was growing both in the number of services deployed and in the number of changes shipping to those services. And because we deploy changes prior to merging them, we needed a more efficient way to group and deploy multiple pull requests at the same time. Our solution at this time was trains. A train was a special pull request that grouped together multiple pull requests (passengers) that would be tested, deployed, and eventually merged at the same time. A user (called a conductor) was responsible for handling most aspects of the process, such as starting a deployment of the train and handling conflicts that arose. Pipelines were added to help manage the rollout path.
Both these systems (trains and pipelines) were only used on our largest monorepo and were implemented in our internal deployment system.

Trains helped improve velocity at first, but over time started to negatively impact developer satisfaction and increase the time to land a pull request. Our internal Developer Experience (DX) team regularly polls our developers to learn about pain points and help inform where to invest in improvements. These surveys consistently rated deployment as the most painful part of the developer's daily experience, highlighting the complexity and friction involved with building and shepherding trains in particular.

This qualitative data was backed by our quantitative metrics, which showed a steady increase in the time it took from pull request to shipped code. Trains could also grow large, containing the changes of up to 15 pull requests. Large trains frequently "derailed" due to a deployment issue, conflicts, or the need for an engineer to remove their change. On painful occasions, developers could wait 8+ hours after joining a train for it to ship, only for it to be removed due to a conflict between two pull requests in the train.

Trains were also not used on every repository, meaning the developer experience varied significantly between services. This led to confusion when engineers moved between services or contributed to services they didn't own, which is fairly frequent due to our inner source model. In short, our process was significantly impacting the productivity of our engineering teams, both in our large monorepo and in service repositories.

Building a better solution for us and, eventually, for customers

By 2020, it was clear that our internal tools and processes for deploying and merging across our repositories were limiting our ability to land pull requests as often as we needed. Beyond just improving velocity, it became clear that our new solution needed to:

- Improve the developer experience of shipping. Engineers wanted to express two simple intents: "I want to ship this change" and "I want to shift to other work"; the system should handle the rest.
- Avoid having problematic pull requests impact everyone. Those causing conflicts or build failures should not impact all other pull requests waiting to merge. The throughput of the overall system should be favored over fairness to an individual pull request.
- Be consistent and as automated as possible across our services and repositories. Manual toil by engineers should be removed wherever possible.

The merge queue project began as part of an overall effort within GitHub to improve availability and remove friction that was preventing developers from shipping at the frequency and level of quality that was needed. Initially, it was focused only on providing a solution for us, but it was built with the expectation that it would eventually be made available to customers.

By mid-2021, a few small internal repositories started testing merge queue, but moving our large monorepo would not happen until the next year, for a few reasons. For one, we could not stop deploying for days or weeks in order to swap systems. At every stage of the project we had to have a working system to ship changes. At a maximum, we could block deployments for an hour or so to run a test or transition. GitHub is remote-first and we have engineers throughout the world, so there are quieter times but never a free pass to take the system offline. Changing the way thousands of developers deploy and merge changes also requires lots of communication to ensure teams are able to maintain velocity throughout the transition. Training 1,000 engineers on a new system overnight is difficult, to say the least.
By rolling out changes to the process in phases (and sometimes testing and rolling back changes early in the morning before most developers started working), we were able to slowly transition our large monorepo and all of our repositories responsible for production services onto merge queue by 2023.

How we use merge queue today

Merge queue has become the single entry point for shipping code changes at GitHub. It was designed and tested at scale, shipping 30,000+ pull requests, with their associated 4.5 million CI runs, for GitHub.com before merge queue was made generally available. For GitHub and our "deploy the merge" process, merge queue dynamically forms groups of pull requests that are candidates for deployment, kicks off builds and tests via GitHub Actions, and ensures our main branch is never updated to a failing commit by enforcing branch protection rules. Pull requests in the queue that conflict with one another are automatically detected and removed, with the queue automatically re-forming groups as needed. Because merge queue is integrated into the pull request workflow (and does not require knowledge of special ChatOps commands, or use of labels or special syntax in comments to manage state), our developer experience is also greatly improved. Developers can add their pull request to the queue and, if they spot an issue with their change, leave the queue with a single click. We can now ship larger groups without the pitfalls and frictions of trains. Trains (our old system) previously limited our ability to deploy more than 15 changes at once, but now we can safely deploy 30 or more if needed. Every month, over 500 engineers merge 2,500 pull requests into our large monorepo with merge queue, more than double the volume from a few years ago. The average wait time to ship a change has also been reduced by 33%. And it's not just the numbers that have improved.
On one of our periodic developer satisfaction surveys, an engineer called merge queue "one of the best quality-of-life improvements to shipping changes that I've seen at GitHub!" It's not a stretch to say that merge queue has transformed the way GitHub deploys changes to production at scale.

How to get started

Merge queue is available to public repositories on GitHub.com owned by organizations and to all repositories on GitHub Enterprise (Cloud or Server). To learn more about merge queue and how it can help velocity and developer satisfaction on your busiest repositories, see our blog post, GitHub merge queue is generally available. Interested in joining GitHub? Check out our open positions or learn more about our platform.
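As the article notes, merge queue kicks off builds and tests via GitHub Actions. A workflow opts into running against queued merge groups by listening for the `merge_group` event; this is a minimal sketch, and the job name and test command are illustrative placeholders, not GitHub's actual CI:

```yaml
name: CI
# Run on pull requests and on merge queue groups, so the same checks
# gate both the open PR and the queued merge candidate.
on:
  pull_request:
  merge_group:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder step; replace with your project's build/test commands.
      - run: ./scripts/test.sh
```

Making the `test` job a required status check via branch protection (or a ruleset) is what lets the queue guarantee the branch is never updated to a failing commit.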
  22. GitHub Enterprise Server 3.12 is now generally available. With this version, customers can choose how to best scale their security strategy, gain more control over deployments, and much more. Highlights of this version include:

- Restrict your deployment rollouts to select tag patterns in GitHub Actions environments.
- Enforce which GitHub Actions workflows must pass with organization-wide repository rulesets.
- Automate pull request merges with merge queues, which validate and merge pull requests into a busy branch, ensuring the branch is never broken, reducing time to merge, and freeing up developers to work on their next tasks.
- Scale your security strategy with Dependabot alert rules. This public beta allows customers to choose how to respond to Dependabot alerts automatically by setting up custom auto-triage rules in their repository or organization.
- Enhance the security of your code with a public beta of secret scanning for non-provider patterns, and an update to code scanning's default setup to support all CodeQL languages.
- GitHub Project templates are generally available at the organization level, allowing customers to share and learn best practices in how to set up and use projects to plan and track their work.
- Updated global navigation to make using and finding information simpler, as well as to improve accessibility and performance.
- Highlight text in markdown files with the alerts markdown extension, which provides five levels to use (note, tip, important, warning, and caution).

Download GitHub Enterprise Server 3.12 now. For help upgrading, use the Upgrade Assistant to find the upgrade path from your current version of GitHub Enterprise Server (GHES) to this new version.
More GitHub Actions features ensure your code is secure, correct, and compliant before you deploy

Enjoy more control over your deployments by configuring tag patterns

Using environments in GitHub Actions lets you configure your deployment environments with protection rules and secrets in order to better ensure secure deployments. As of today, tag patterns are generally available. This capability makes it easy to specify selected tags or tag patterns on your protected environments in order to add an additional layer of security and control to your deployments. For example, you can now define that only "Releases/*" tags can be deployed to your production environment. Learn more about securing environments using deployment protection rules.

Required workflows with repository rulesets are now generally available

This feature makes it easy for teams to define and enforce standard CI/CD practices in the form of rulesets across multiple repositories within their organization without needing to configure individual repositories. For anyone using the legacy required workflows feature, your workflows will be automatically migrated to rulesets. With rulesets, it's easier than ever for organizations to ensure their team's code is secure, compliant, and correct before being deployed to production. Check out our documentation to learn more about requiring workflows with rulesets.

Bringing automation to merge queue for more efficient collaboration

Automate branch management

Collaborative coding is essential for team productivity, but it requires efficient branch management to avoid frustration and maintain velocity. Automated branch management, like merge queue, streamlines this process by ensuring compatibility, alerting developers of any issues, and allowing teams to focus on coding without interruptions. With merge queue available in GHES, enterprises have a central platform for collaboration and the integrated tools for enterprise-level development.
Simplify your pull request process by using merge queues today.

Using GitHub Advanced Security to scale and enhance your security strategy

Scale your security strategy with Dependabot alert rules

With Dependabot, you can proactively manage security alerts to ensure high-priority items are surfaced. With user-configured alert rules, you can now tailor your security strategy to your specific risk tolerance and contextual needs, streamlining alert triage and remediation processes. GitHub offers suggested rulesets curated for all users, automatically filtering out false positives for public repositories, with suggestions for private ones. Dependabot's rules engine empowers developers to automatically manage alerts, from auto-dismissing to reopening based on customizable criteria. Stay ahead of vulnerabilities with Dependabot, supported by GitHub's continuously improved vulnerability patterns.

CodeQL supported languages can be set up automatically

With this update, code scanning default setup changes how languages are analyzed in repositories. Repositories no longer need to manually select compiled languages for inclusion in the default setup configuration. Instead, the system will automatically attempt to analyze all CodeQL supported languages. The "edit configuration" page allows users to see which languages are included in each configuration and apply any customization that may be required. This feature is available at both the repository and organization levels, guaranteeing the best setup for your repository.

Expanded protection beyond patterns

Secret scanning goes beyond provider patterns to detect critical security vulnerabilities like HTTP authentication headers, database connection strings, and private keys. Simply enable the "Scan for non-provider patterns" option in your repository or organization's security settings to increase your defenses.
With detected secrets conveniently categorized under a new "Other" tab on the alert list, you can ensure thorough protection for your most sensitive information. Stay ahead of threats and safeguard your data with our comprehensive secret scanning capabilities.

New productivity enhancements to keep teams in the flow

Make what needs to be noticed stand out

Markdown serves as a fundamental tool. It is used for documentation, notes, comments, and decision records. GitHub is now taking it one step further with the addition of a Markdown extension to highlight text, signaling that certain information carries different meaning than the rest.

Searching is easier and more efficient

We've introduced the redesigned global navigation for GitHub.com, featuring a suite of enhancements tailored to elevate user experience and efficiency. Our latest updates to GHES aim to streamline navigation, enhance accessibility, and boost performance. With improved wayfinding through breadcrumbs and easy access to essential repositories and teams from any location, navigating GitHub has never been more seamless.

Create templates to simplify project management

Our latest feature update to GitHub Projects is designed to enhance project management by streamlining project creation and fostering collaboration within teams. With these updates, you can now swiftly create, share, and use project templates within your organizations, simplifying the process of starting new projects.

Try it today

To learn more about GitHub Enterprise Server 3.12, read the release notes or download it now. Not using GHES already? Start a free trial to innovate faster with the developer experience platform companies know and love.
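The alerts markdown extension mentioned above uses blockquote-style syntax with a type marker on the first line; a minimal sketch of the five levels (the descriptions are illustrative):

```markdown
> [!NOTE]
> Useful information that readers should know, even when skimming.

> [!TIP]
> Helpful advice for doing things better or more easily.

> [!IMPORTANT]
> Key information readers need in order to achieve their goal.

> [!WARNING]
> Urgent information that demands immediate attention.

> [!CAUTION]
> Advises about risks or negative outcomes of an action.
```

Each alert renders as a highlighted callout with a distinct color and icon, making it stand out from surrounding prose.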
  23. Implementing Continuous Integration/Continuous Deployment (CI/CD) for a Python application using Django involves several steps to automate testing and deployment processes. This guide will walk you through setting up a basic CI/CD pipeline using GitHub Actions, a popular CI/CD tool that integrates seamlessly with GitHub repositories.

Step 1: Setting up Your Django Project

Ensure your Django project is in a Git repository hosted on GitHub. This repository will be the basis for setting up your CI/CD pipeline. View the full article
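A basic pipeline of the kind this guide describes could start from a workflow that installs dependencies and runs Django's test suite on every push. This is a hedged sketch: the file path, Python version, and `requirements.txt` location are assumptions about a typical Django project, not details from the guide.

```yaml
# .github/workflows/ci.yml (hypothetical)
name: Django CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      # Assumes dependencies are pinned in requirements.txt at the repo root.
      - run: pip install -r requirements.txt
      # Runs the Django test suite.
      - run: python manage.py test
```

From here, a deployment job gated on the test job passing would complete the CD half of the pipeline.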
  24. Companies and their structures are always evolving. Regardless of the reason, with people and information changing places, it's easy for maintainership/ownership information about a repository to become outdated or unclear. Maintainers play a crucial role in guiding and stewarding a project, and knowing who they are is essential for efficient collaboration and decision-making. This information can be stored in the CODEOWNERS file, but how can we ensure that it's up to date? Let's delve into why this matters and how the GitHub OSPO's tool, cleanowners, can help maintainers achieve accurate ownership information for their projects.

The importance of accurate maintainer information

In any software project, having clear ownership guidelines is crucial for effective collaboration. Maintainers are responsible for reviewing contributions, merging changes, and guiding the project's direction. Without clear ownership information, contributors may be unsure of whom to reach out to for guidance or review. Imagine that you've discovered a high-risk security vulnerability and nobody is responding to your pull request to fix it, let alone coordinating so that everyone across the company gets the patches needed to fix it. This ambiguity can lead to delays and confusion, unfortunately teaching teams that it's better to maintain control than to collaborate. These are not the outcomes we are hoping for as developers, so it's important for us to consider how we can ensure active maintainership, especially of our production components.

CODEOWNERS files

Solving this problem starts with documenting maintainers. A CODEOWNERS file, residing in the root of a repository, allows maintainers to specify individuals or teams who are responsible for reviewing and maintaining specific areas of the codebase. By defining ownership at the file or directory level, CODEOWNERS provides clarity on who is responsible for reviewing changes within each part of the project.
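As a sketch of the file-level and directory-level ownership described above (the team names are hypothetical, not from the article):

```
# CODEOWNERS: the last matching pattern takes precedence.
*           @acme/maintainers        # default owners for everything
/docs/      @acme/technical-writers  # directory-level ownership
*.js        @acme/frontend-team      # file-type ownership
```

With this in place, a pull request touching `/docs/` automatically requests review from `@acme/technical-writers`, so contributors never have to guess who to ask.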
CODEOWNERS not only streamlines the contribution process but also fosters transparency and accountability within the organization. Contributors know exactly who to contact for feedback, escalation, or approval, while maintainers can effectively distribute responsibilities and ensure that every part of the codebase has proper coverage.

Ensuring clean and accurate CODEOWNERS files with cleanowners

While CODEOWNERS is a powerful tool for managing ownership information, maintaining it manually can be tedious and easily overlooked. To address this challenge, the GitHub OSPO developed cleanowners: a GitHub Action that automates the process of keeping CODEOWNERS files clean and up to date. If it detects that something needs to change, it will open a pull request so the problem gets addressed sooner rather than later. Here's how cleanowners works:

```yaml
---
name: Weekly codeowners cleanup

on:
  workflow_dispatch:
  schedule:
    - cron: '3 2 1 * *'

permissions:
  issues: write

jobs:
  cleanowners:
    name: cleanowners
    runs-on: ubuntu-latest

    steps:
      - name: Run cleanowners action
        uses: github/cleanowners@v1
        env:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
          ORGANIZATION: <YOUR_ORGANIZATION_GOES_HERE>
```

This workflow, triggered by scheduled runs, ensures that the CODEOWNERS file is cleaned automatically. By leveraging cleanowners, maintainers can rest assured that ownership information is accurate, or it will be brought to the attention of the team via an automatic pull request requesting an update to the file. Here is an example where @zkoppert and @no-longer-in-this-org used to both be maintainers, but @no-longer-in-this-org has left the company and no longer maintains this repository.

Dive in

With tools like cleanowners, the task of managing CODEOWNERS files becomes actively managed instead of ignored, allowing maintainers to focus on what matters most: building and nurturing thriving software projects.
By embracing accurate ownership documentation practices, software projects can continue to flourish, guided by clear ownership and collaboration principles. Check out the repository for more information on how to configure and set up the action.
  25. Research shows that developers complete tasks 55% faster, and at higher quality, when using GitHub Copilot, helping businesses accelerate the pace of software development and deliver more value to their customers. We understand that adopting new technologies in your business involves thorough evaluation and gaining cross-functional alignment. To jump-start your organization's entry into the AI era, we've partnered with engineering leaders at some of the most influential companies in the world to create a new expert-guided GitHub Learning Pathway. This prescriptive content will help organizational leaders understand:

- What can your business achieve using GitHub Copilot?
- How does GitHub Copilot handle data?
- What are the best practices for creating an AI governance policy?
- How can my team successfully roll out GitHub Copilot to our developers?

Along the way, you'll also get tips and insights from engineering leaders at ASOS, Lyft, Cisco, CARIAD (a Volkswagen Group company), and more who have used GitHub Copilot to increase operational efficiency, deliver innovative products faster, and improve developer happiness!

Start your GitHub Copilot Learning Pathway

Select your GitHub Learning Pathway

NEW! AI-powered development with GitHub Copilot

From measuring the potential impact of GitHub Copilot on your business to understanding the essential elements of a GitHub Copilot rollout, we'll walk you through everything you need to find success with integrating AI into your business's software development lifecycle.

CI/CD with GitHub Actions

From building your first CI/CD workflow with GitHub Actions to enterprise-scale automation, you'll learn how teams at leading organizations unlock productivity, reduce toil, and boost developer happiness.

Application Security with GitHub Advanced Security

Protect your codebase without blocking developer productivity with GitHub Advanced Security.
You'll learn how to get started in just a few clicks and move on to customizing GitHub Advanced Security to meet your organization's unique needs.

Administration and Governance with GitHub Enterprise

Configure GitHub Enterprise Cloud to prevent downstream maintenance burdens while promoting innersource, collaboration, and efficient organizational structures, no matter the size and scale of your organization.

Learning Pathways are organized into three modules:

- Essentials modules introduce key concepts and build a solid foundation of understanding.
- Intermediate modules expand beyond the basics and detail best practices for success.
- Advanced modules offer a starting point for building deep expertise in your use of GitHub.

We are hard at work developing the next GitHub Copilot Learning Pathway module, which will include a deep dive into the nitty-gritty of working alongside your new AI pair programmer. We'll cover best practices for prompt engineering and using GitHub Copilot to write tests and refactor code, among other topics. Are you ready to take your GitHub skills to the next level? Get started with GitHub Learning Pathways today.