Search the Community
Showing results for tags 'supply chains'.
-
Authors/Presenters: Xueqiang Wang, Yifan Zhang, XiaoFeng Wang, Yan Jia, Luyi Xing Many thanks to USENIX for publishing their outstanding USENIX Security ’23 presenters’ content, and for the organization’s strong commitment to Open Access. This content originates from the conference’s events at the Anaheim Marriott and is available via the organization’s YouTube channel. The post USENIX Security ’23 – Union Under Duress: Understanding Hazards of Duplicate Resource Mismediation in Android Software Supply Chain appeared first on Security Boulevard. View the full article
-
- usenix
- supply chains
-
(and 1 more)
Tagged with:
-
Most businesses know that taking responsibility for their environmental and social impact is key to long-term success. But how can they make fully informed decisions when most companies only have visibility into their immediate suppliers? At Prewave, we’re driven by the mission to help companies make their entire supply chains more resilient, transparent, and sustainable. Our end-to-end platform monitors and predicts a wide range of supply chain risks, and AI is the driving force behind its success. Without AI, handling vast volumes of data and extracting meaningful insights from publicly available information would be almost unfathomable at the scale at which we do it to help our clients. Because of that, Prewave needs a rock-solid technology foundation that is reliable, secure, and highly scalable to continually handle this demand. That’s why we built the Prewave supply chain risk intelligence platform on Google Cloud from its inception in 2019. Back then, as a small team, we didn’t want to maintain hardware or infrastructure, and Google Cloud managed services stood out for providing reliability, availability, and security while freeing us up to develop our product and focus on Prewave’s mission. A shared concern for sustainability also influenced our decision, and we’re proud to be working with data centers with such a low carbon footprint.

Tracking hundreds of thousands of suppliers

Prewave’s end-to-end platform solves two key challenges for customers: First, it makes supply chains more resilient by identifying disruption risks and developing the appropriate mitigation plans. And second, it makes supply chains more sustainable by detecting and solving ESG risks, such as forced labor or environmental issues. It all starts with our Risk Monitoring capability, which uses AI developed by our co-founder Lisa in 2012 during her PhD research.
With it, we scan publicly available information in more than 120 languages, looking for insights that can serve as early signals of Risk Events for our clients, such as labor unrest, an accident, a fire, or any of 140 other risk types that can disrupt their supply chains. Based on the resulting insights, clients can take action on our platform to mitigate the risk, from filing an incident review to arranging an on-site audit. With this information, Prewave also maps our clients’ supply chains from immediate and sub-tier suppliers down to raw materials providers. This level of granularity and transparency is now a requirement of new regulations such as the European Corporate Sustainability Due Diligence Directive (CSDDD), but it can be challenging for our clients to achieve without help. They usually have hundreds or thousands of suppliers, and our platform helps them know each one, while also focusing attention, when needed, on those with the highest risk. The Prewave platform keeps the effort on the supplier’s side as light as possible. Suppliers only have to act if a potential risk is flagged by our Tier-N Monitoring capability, in which case we support them in fixing issues and raising their standards. Additionally, this level of visibility frees them from having to manually answer hundreds of questionnaires in order to qualify to do business with more partners. To make all this possible, our engineering teams rely heavily on scalable technology such as Google Kubernetes Engine (GKE) to support our SaaS. We recently switched from Standard to Autopilot and saw great gains in time efficiency: we no longer need to ensure that node pools are in place or that all available CPU capacity is being used appropriately, which has helped save up to 30% of resources. This has also helped us reduce costs, because we only pay for the deployments we run.
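To make the Autopilot cost point concrete, here is a minimal sketch of a Kubernetes Deployment as it might run on GKE Autopilot. The workload name, image path, and resource figures are all hypothetical, not Prewave's actual configuration; the point is that on Autopilot, billing follows the pod resource requests, so right-sizing `requests` directly controls cost:

```yaml
# Hypothetical Deployment sketch for GKE Autopilot: you declare pod resource
# requests and Google provisions (and bills) the underlying nodes for you,
# so there are no node pools to manage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: risk-monitoring        # hypothetical workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: risk-monitoring
  template:
    metadata:
      labels:
        app: risk-monitoring
    spec:
      containers:
        - name: api
          # Hypothetical Artifact Registry image path
          image: europe-docker.pkg.dev/example-project/apps/risk-monitoring:1.0.0
          resources:
            requests:           # Autopilot bills per pod based on these requests
              cpu: "500m"
              memory: 512Mi
```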
We also believe that having the best tools in place is key to delivering the best experience, not only to customers but also to our internal teams. So we use Cloud Build and Artifact Registry to experiment, build, and deploy artifacts and to manage the Docker containers we also use with GKE. Meanwhile, Cloud Armor acts as a firewall protecting us against denial-of-service and web attacks. Because scalability is key for our purposes, the application development and data science teams use Cloud SQL as their database. This fully managed service helps us focus on developing our product, since we don’t have to worry about managing servers to match demand. The data science teams also use Compute Engine to host our AI implementations, as we develop and maintain our own models, and these systems are at the core of Prewave’s daily work.

Helping more businesses improve their deep supply chains

Since 2020, Prewave has grown from three clients to more than 170, our team of 10 has grown to more than 160, and the company’s revenue has multiplied by 100, a significant milestone. We’ve also released many new platform features since then, which required us to scale the product alongside the company. With Google Cloud, this wasn’t an issue. We simply extended the resources the new implementations needed, helping us gain more visibility at the right time and win new customers. Because our foundation is highly stable and scalable, growing our business has been a smooth ride. Next, Prewave is continuing the expansion across Europe that began in 2023 before moving into new markets, such as the US. This is going well, and our association with Google Cloud is helping us win the trust of early-stage clients who clearly also trust in its reliability and security.
We’re confident that our collaboration with Google Cloud will continue to bring us huge benefits as we help more companies internationally to achieve transparency, resilience, sustainability, and legal compliance along their deep supply chains. View the full article
-
Vulnerability Overview NSFOCUS CERT recently detected that the security community had disclosed a supply chain backdoor vulnerability in XZ-Utils (CVE-2024-3094), with a CVSS score of 10. Because the underlying layer of SSH relies on liblzma, when certain conditions are met, an attacker can use this vulnerability to bypass SSH authentication and gain unauthorized access on the […] The post XZ-Utils Supply Chain Backdoor Vulnerability Updated Advisory (CVE-2024-3094) appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks. The post XZ-Utils Supply Chain Backdoor Vulnerability Updated Advisory (CVE-2024-3094) appeared first on Security Boulevard. View the full article
-
- backdoors
- supply chains
-
(and 1 more)
Tagged with:
-
The discovery of the backdoor in the xz utils compression software last week has shone a spotlight on the threats to the digital supply chain. Wired has an excellent analysis of the attack, theorizing that the years-long campaign may have been run by the Russian foreign intelligence service (which was also behind the SUNBURST, aka SolarWinds, attack). Here […] The post XZ and the Threats to the Digital Supply Chain appeared first on Eclypsium | Supply Chain Security for the Modern Enterprise. The post XZ and the Threats to the Digital Supply Chain appeared first on Security Boulevard. View the full article
-
- threats
- supply chains
-
(and 1 more)
Tagged with:
-
AWS B2B Data Interchange now supports an expanded set of X12 transaction sets for finance, transportation, and supply chain use cases. These transaction sets add to the service’s existing support for dozens of other X12 transactions and extend the benefits of B2B Data Interchange’s fully automated, event-driven EDI transformation capabilities to new industry use cases. View the full article
- 1 reply
-
- b2b
- supply chains
-
(and 1 more)
Tagged with:
-
Security experts are sounding alarms about what some are calling the most sophisticated supply chain attack ever carried out on an open source project: a malicious backdoor planted in xz/liblzma (part of the xz-utils package), a popular open source compression tool. The post A software supply chain meltdown: What we know about the XZ Trojan appeared first on Security Boulevard. View the full article
-
CVE-2024-3094 is a reported supply chain compromise of the xz libraries. The resulting interference with sshd authentication could enable an attacker to gain unauthorized access to the system. Overview Malicious code was identified within the xz upstream tarballs, beginning with version 5.6.0. This malicious code is introduced through a sophisticated obfuscation technique during the liblzma […] The post Understanding and Mitigating the Fedora Rawhide Vulnerability (CVE-2024-3094) appeared first on OX Security. The post Understanding and Mitigating the Fedora Rawhide Vulnerability (CVE-2024-3094) appeared first on Security Boulevard. View the full article
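As a quick practical aid to the advisories above, here is a minimal sketch (my own, not from either advisory) that checks a locally installed xz against the versions publicly reported as backdoored, 5.6.0 and 5.6.1. The parsing assumes the usual first line of `xz --version` output ends with the version number:

```python
# Sketch: flag xz/liblzma versions publicly reported as backdoored
# (CVE-2024-3094). The affected-version set and the output format of
# `xz --version` are assumptions based on public advisories; this is an
# illustration, not an authoritative detector.
import shutil
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}

def parse_xz_version(output: str) -> str:
    # `xz --version` typically prints a first line like "xz (XZ Utils) 5.6.1",
    # followed by "liblzma 5.6.1"; take the last token of the first line.
    first_line = output.strip().splitlines()[0]
    return first_line.split()[-1]

def is_backdoored(version: str) -> bool:
    return version in BACKDOORED

if __name__ == "__main__":
    if shutil.which("xz"):
        out = subprocess.run(["xz", "--version"],
                             capture_output=True, text=True).stdout
        v = parse_xz_version(out)
        status = "VULNERABLE" if is_backdoored(v) else "not in the known-bad list"
        print(f"xz {v}: {status}")
```

Note that a version check alone does not prove a system was exploited; distributions shipped patched or rolled-back builds, so consult your vendor's advisory as well.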
-
Welcome to the ninth installment of IT Revolution’s series based on the book Investments Unlimited: A Novel about DevOps, Security, Audit Compliance, and Thriving in the Digital Age, written by Helen Beal, Bill Bensing, Jason Cox, Michael Edenzon, Tapabrata Pal, Caleb Queern, John Rzeszotarski, Andres Vega, and John Willis. In our last installment, complications mounted. Now, with the regulatory reckoning nearing and prototypes piled high, a supply chain cyberattack pulls Michelle’s engineers off-track again! In crisis talks, auditor Lucy prompts provocative ideas around “dependency” risks, but will impatient Omar or pragmatic Michelle connect these dots in time?

Thursday, September 1st

The next month flew by quickly. With the post-lift-and-shift Omega project in the rearview mirror and a renewed energy from their recent successes, Team Kraken was making significant progress with Turbo Eureka. The turbo of Turbo Eureka was really working! They had implemented more quality gates and really felt like they were on a roll. Michelle and Bill had just finished giving another demo session to a broader audience. Other people within IUI were becoming interested in what the team was doing. They were even using Turbo Eureka to automate the governance for all Turbo Eureka software they developed. Michelle sat down at her desk and looked at her calendar. It had now been six months since they had first received the MRIA. She opened her laptop and navigated to the MRIA Outline document in the MRIA Madness folder. She had been keeping track at a high level of Team Kraken’s accomplishments.

Actionable Items: Based upon the MRAs issued, the following items should be addressed with formal standardized approaches:

Goal: Define a minimally acceptable release approach

Objectives:
DONE: Enforce peer reviews of code that is pushed to a production environment.
Identify and enforce minimum quality gates.
DONE – Unit Tests
DONE – Source Code Quality Analysis
DONE – Static Application Security Test
In Progress – Software Composition Analysis
Backlogged: Remove all elevated access to all production environments for everyone.

The team had even been able to add some nice features to the Git repo. They continued using the open-source project that created software for badges that were color-coded to make it easy to visually understand the status and quality of the software in a Git repo. Michelle pulled up Team Kraken’s Git repo on her computer. In the middle of the screen, offset to the left a bit, she saw the badges. This repo had a badge with the status of every quality gate, showing whether the most recent quality gate passed or failed. The first badge simply showed the version of the software. The left part of the badge had a gray background with white writing that read Version. The right-hand side of the badge had a baby blue background color with gray text that said 0.0.4. Below the version badge, there was a badge that read Unit Test. The right-hand side of the badge had a green background with the word PASS in it followed by a check mark. There were a few more subsequent badges that showed green, just like the unit test badge. Then there was one that read Software Composition Analysis. The right-hand side of the badge was red with the word FAILED inside of it and a large X. “Omar, have a second?” Michelle hollered across the floor. Omar got up from his desk and walked over to Michelle, looking at her screen. “What’s up?” “Why is the software composition analysis failing for this repo?” “Let’s see . . . what’s the latest commit number for main?” Michelle clicked around on the web browser and highlighted some text. “The most recent commit to main is 9349c9b.” “Okay, hold on to that number. Open another browser tab and go to https://attestations.investmentsunlimitedbank.com.” Michelle typed the address in.
“Okay, paste the commit number into that box,” Omar said, pointing to a box labeled Enter Commit Number on her screen. Michelle pasted the number into the box and clicked Find Attestation. A table popped up. It only had one row. She clicked View Attestation and the page refreshed, rendering all of the different attestation files. She scrolled down until she saw a title with Software Composition Analysis inside of it. There was a lot of information in this part of the file. She saw recognizable sections that were named High, Medium, and Low. Omar pointed at these sections. “Those are lists of common vulnerabilities and exposures—CVE for short. See, right there,” Omar said as he read the screen. “Look at this Critical: CVE-2021-44229. If you remember, by policy, there shouldn’t be any CVEs in the critical or high category.” “Do you know what that CVE is for?” Michelle asked. “No, we’d have to look it up. But on the bright side, it looks like Turbo Eureka is working!” “I guess it is,” Michelle replied. “Okay, thanks for the info.” Omar turned around and went back to his desk. Michelle scrolled through the attestations website a bit longer. She was impressed with all the information they were collecting and the controls they were implementing. A few hours later, Omar came up to Michelle. “I just saw in my Twitter feed that there’s a new Java vulnerability causing problems. Do you remember which CVE you saw this morning?” “Who posted on Twitter?” Michelle asked. “Look at this. It looks like some security firm,” Omar replied, showing her his phone. “But it’s really started to blow up on Twitter. Can you pull up the one that you asked me about this morning?” “Yeah, one sec.” Michelle spun around to her laptop and pulled up the attestations website again. “Oh, yeah! It’s the same one!” Michelle practically shouted. “Have you heard anything from Barry?” Omar asked. “Not a thing. Let me reach out and find out?” Michelle picked up her phone and started typing. 
Hey Barry, have you heard anything about this new critical CVE thing? Some Java vulnerability that’s blowing up on Twitter? Omar went back to his desk. Michelle waited for Barry’s response for some time, but eventually evening rolled around and she headed home. Maybe she’d find Barry tomorrow and ask him about it. “Mmm. I think the fruity flakes are my favorite,” Michelle said. “No way, Mom. The peanut butter and chocolate puffs has that beat any day.” One of Michelle’s sons was spooning ice cream into his mouth as fast as he could. “I’d have to agree,” her wife said. Michelle was with her twins and wife at Revolution Ice Cream. Her twins had melted ice cream running down their faces. She couldn’t help but notice most of it was on their shirts. Michelle’s phone pinged and vibrated. “Nope,” she said while fishing for her phone. “Fruity flakes all the way.” She found her phone at the bottom of her purse and looked at the text. Michelle, can you call me? it read. It was from Barry. It wasn’t common for him to text Michelle, especially after hours. Barry was an email person. He hadn’t even warmed up to chat. She got a bit tense. Out with the family. Wait until tomorrow? Michelle texted back. She knew what the answer would be, but this was her polite way of telling people to bug off. Me too. This can’t wait, Barry’s short reply said. “Oh no, not another one,” Michelle muttered. She started to get nightmares from the lift-and-shift Omega fiasco not so long ago. Was that acting up again? She and the team had done a significant refactor to the application. Then she remembered the CVE she had asked Barry about earlier that day. Her wife looked over. “Everything okay?” “Can you watch the kids for a second? I need to make a call. Something’s up at work, and I need to check in on it.” “Sure.” “Thanks,” Michelle said as she scooted out the door. “Michelle, we’re up shit creek without a paddle,” Barry said when he answered Michelle’s call. 
“Our Network Operations Center picked up some suspicious network traffic earlier today. Lucy called me a little earlier saying she got a text from the NOC to join a call. And I’m just now seeing your text. What’s going on? I’m worried it’s related to the chatter we’re hearing too.” Michelle replied, “I was thinking of asking you that. You’re the security guy. All I know is . . . ” Barry didn’t let her finish her sentence. “Michelle, I’ll have to call you back. We may have a problem here. Tim is pinging me. Bye.” Michelle’s heart sank deep into her stomach, so much so that her ice cream almost made an encore. Michelle scooted back inside. “Grab some to-go lids for the kids. We need to cut this short,” she said to her wife. She stood up and reached for the twins’ ice cream cups. “Can you drive? I may need to take a few calls on the way,” Michelle added. Five minutes later, Michelle’s phone rang. It was Barry again. “What’s up?” she asked. “It’s bad. That network chatter is related to the critical CVE that you texted me about. Tim said the NOC is getting flooded with alerts. I have friends texting me from other companies. I’m about to hop on a call with our security consultants at AlertFirst to see what they know.” Barry paused. “What do you want me to do?” Michelle asked, hoping she wouldn’t have to get on her laptop tonight. “Stand by? After this call with our friends at AlertFirst I’ll know more,” Barry said. Over the next hour, Barry kept Michelle up to speed via a flurry of texts on their inter-office communication channel. According to AlertFirst, a security consultancy IUI hired for computer forensics and penetration testing among other things, what was happening at IUI was not an isolated incident. In fact, it was quickly turning into a global firestorm. Soon, news of the vulnerability was everywhere, even trending on social media. Companies around the world were scrambling. Technology teams from every sector were all focused on this one issue.
It was as if someone had hit an international “pause” button on everything else in the tech space. As the night rolled on, it turned out the vulnerability was pervasive yet overwhelmingly elusive. Every time the IUI NOC team thought they had addressed every vulnerable application, a new one would surface or a third-party software vendor would publish a new patch. All other work at IUI was suspended while they directed their efforts to eradicating the vulnerability.

Friday, September 2nd

“Michelle, I think our security consultants found something interesting,” Lucy said. Lucy was on IUI’s security team and was an expert on IUI’s central logging platform, which was used to collect any data created by applications and hardware under IUI’s control. She was grinning from ear to ear. It was clear that Lucy was enjoying the challenge, unlike everyone else in the NOC and most of IUI’s technology team, who looked like they had just survived a harried expedition across a dense, dangerous jungle. “What’s up?” Michelle asked, worried it was going to be more bad news. “I was on the phone until late last night with our AlertFirst consultants,” Lucy answered, looking excited rather than bothered. “The NOC has kinda stopped the bleeding for now, but the patient’s still in serious condition. This is a critical vulnerability, as you know, and we still haven’t fixed it or found a cure yet. No one has. We’re all just on damage control.” Lucy continued, “Anyways, what I wanted to talk to you about is this: it seems this vulnerability is being caused by a specific Java library. Our AlertFirst consultants believe this as well, and we’re hoping you can help us confirm. Can you help us check our applications for this dependency?” As Lucy was trying to pull up her notes from last night, Barry and Andrea came walking over to listen in on Lucy and Michelle’s conversation. Everyone at IUI was deeply involved in the incident. With the MRIA still over their heads, they couldn’t risk a breach.
Michelle replied, “Hmm, that’s a pretty common dependency. I wish I had better news, Lucy, but it might take some time to find them all. Omar, can you cross-check our applications for this dependency?” Omar began pulling up Git repositories and everyone stood around awkwardly, wishing they had some way to help. But this was just going to take time. “So, while we wait, I’d love to understand this better. What is a ‘dependency?’” Andrea asked. Michelle turned to Andrea, “It’s an open-source library that we use in our applications.” “An open-source library?” “Oh, well, software engineering, at its heart, is simply writing code. Like instructions to bake a cake—well, a highly complex cake,” Michelle said, tapping her foot impatiently on the floor while staring over Omar’s shoulder. “But there are lots of ways to bake a cake, and there are lots of steps,” she continued. Andrea listened intently. “While we may write a lot of code ourselves, kind of like writing our own recipes, we save a lot of time and effort just grabbing bits from others. You know, why reinvent the wheel when the code is already out there in the open? You can use this ‘open’ software that is stored in public code repositories or you can use bundles of this software that we call ‘libraries.’ Our code depends on these libraries. These are our applications’ dependencies.” “Oh, so you’re using pieces of someone else’s recipe to bake your own cake?” “Yep,” Michelle replied. Everyone continued to watch as Omar went through Git repositories on his laptop. Every time he found an app using the dependency, he put a check mark by the app’s name on a sheet of paper, a list that was quickly growing longer and longer. Michelle asked, “Omar, can’t you look at just the Java repositories?” “Wish I could tell just by their names,” Omar replied. “So, when you open a repository and find it’s using Java, can you tell if it is using the dependency?” Michelle asked again. 
“I think I can tell if our code is directly using this dependency. But I couldn’t tell easily if it’s a transitive dependency,” Omar replied. “Transitive dependency?” Andrea asked. “Basically it’s like the recipe we borrowed had already borrowed a recipe, or bits of the recipe, from someone else,” Michelle said, a little frustrated. Andrea gave her a quizzical look. “Let’s say we want vanilla icing for our cake, but we don’t want to make it from scratch. So we get the premade stuff from the store. But the company that made that icing didn’t want to make their own vanilla flavoring from scratch; they bought it from somewhere else. That’s a transitive dependency.” “So if the vanilla used in the icing is bad, my cake will be bad, and I have no control over it?” Andrea asked. Lucy burst out, “Yup. It’s a software supply chain problem. Just like if your icing company bought vanilla flavoring that had been tainted or whatever.” Lucy looked like she was going to explode with excitement. She was the only one. Everyone else looked like they could use about a week’s worth of vacation time on a secluded beach. “You see, a supply chain is the entire production flow. It’s everything involved to deliver a product: the people, the materials, and the activities that produce a product like a vanilla frosted cake. The only difference here is that we’re not talking about a cake, we’re talking about software.” Andrea got excited for a minute. “This is fascinating.” “Really?” Omar quipped. Lucy continued, ignoring Omar and engaging directly with Andrea. “The problem is, when I go to a store to buy a cake or vanilla frosting, I’m not likely to ask who produced the vanilla flavoring. And it might not even be something the frosting company or grocery store readily knows!” “No,” responded Bill, “I wouldn’t even think to ask. I trust the store I’m buying from, therefore I trust they wouldn’t purchase any bad vanilla or software with bad dependencies.” “That’s a good point,” Lucy said. 
“Because you trust the store, you’re implicitly trusting the full supply chain of the cake. You’re trusting all the people, activities, and resources involved in that supply chain.” “We probably do the same thing in software. If we trust the software vendor, we trust the whole supply chain of the software,” Bill commented. “I may be changing the topic a bit,” Andrea joined back in the discussion. “But I find developers trust open-source software more than vendor software.” “Isn’t that natural? I mean, everyone can see the code!” Omar added, still slowly adding apps to his list. Lucy responded with a question: “Are you sure about that? What do the rest of you think?” Everyone’s eyes darted around the semicircle, looking at each other. Michelle had this sense that Lucy’s question was loaded, but she couldn’t think of why. “That sounds like a trick question,” Andrea responded. “Omar’s right,” Bill blurted out. “Anyone can see the open-source code, anyone can review it, test it. On the other hand, you’re relying on a company that created the closed source. You don’t know what’s in their code.” “I agree,” Andrea said out loud. “Me too,” said Michelle. “Me three,” said Omar. “Well, I think you’re wrong. Open source is no better than closed source, nor is it any worse. You assumed that because it was open, it was being reviewed by many other people. But that’s just an assumption. You don’t know it to be true. This is an example of something called ‘diffusion of responsibility.’” Lucy did sound very academic. “Wait,” Omar paused his list making and spun around in his chair. “You’re telling me that even though open source is open, it’s no more likely to be safer than closed source?” “That’s exactly what I’m saying. That’s where diffusion of responsibility comes in. Diffusion of responsibility refers to a situation where as the number of bystanders increases, the personal responsibility that an individual bystander feels decreases.
As a consequence, so does their individual tendency to help. So, for an open-source project, someone using that project assumes that the project’s team and other users are ensuring some level of quality. If all users of that project feel that way, effectively no one is actually reviewing anything,” Lucy said. “Lucy, it may be the hangryness or the lack of sleep, but I don’t see how this is getting back to that dependency issue we’re having,” Michelle responded heatedly. “Yes, the dependency. Sorry for the diversion, it’s just that this is so interesting.” Omar gave Michelle a pointed look before spinning back around to his laptop. Obviously he didn’t think so either. “We think the dependency that’s causing this problem is an open-source project, based on talks with our security firm. But we don’t know yet if it’s a software supply chain attack,” Lucy explained. “How can someone attack a supply chain?” Andrea asked. “Issues caused by someone in the supply chain can be unintentional or nefarious,” Lucy said. “For example, that vanilla we keep talking about. If the vanilla manufacturer didn’t properly clean their equipment or the production line had a bacteria issue, then all of the cakes using that manufacturer’s vanilla could make people sick. This could affect people throughout the world, depending on who purchases that manufacturer’s vanilla and where they are. The same is true if there was a disgruntled employee. Someone with nefarious intentions could poison the vanilla, causing the same issues. The outcome is still a lot of sick people,” explained Lucy. “We think it’s the same with our software supply chain, like the dependency in question right now. It could be just a coding error or a bug that wasn’t caught before release. But if someone nefariously introduced some malicious code in the project, then we’d have a software supply chain attack,” said Lucy. “Ah, I get it now,” Andrea said. “Somehow there is a flaw with this dependency.
This flaw has gone overlooked. Since everyone just trusted things were good, without validating, this flaw crept its way into our software. It reminds me of something my dad used to say: ‘The road to hell is paved with good intentions.’” “The only thing I don’t understand is how we could have prevented this?” Omar said, spinning around once more and handing Lucy the very long list of affected apps. “What could we have done better? Is this something we can check for when we build our own software?” “Perfect question!” Lucy replied. “And one I don’t have an answer to.” “Hey, gang!” someone shouted across the room. “The bosses got catering for a late lunch. It’s all set up in the NOC, so go get yourself a bite.” “Let’s get Michelle to the front of the line. She’s slowly transforming into a hangry monster,” Omar said, only half joking. Michelle quickly threw him a mean glance. Eureka! Together Michelle and Lucy crystallized the power of SBOMs to trace every software ingredient to its source! With Omar’s graph prototype, a new possibility arises to track assets enterprise-wide. Could this craft future-proof governance and help developers monitor risks proactively? Signs now suggest the Kraken’s on the cusp of a potential breakthrough! But bumpy political battles still surely await on the road toward reform! Join us next time for the continuation of the story. Or, go to your favorite book retailer and pick up a copy of Investments Unlimited today. The post Attack of the Supply Chains – Investments Unlimited Series: Chapter 9 appeared first on IT Revolution. View the full article
-
Starting today, administrators of package repositories can manage the configuration of multiple packages in one single place with the new AWS CodeArtifact package group configuration capability. A package group allows you to define how packages are updated by internal developers or from upstream repositories. You can now allow or block internal developers to publish packages or allow or block upstream updates for a group of packages. CodeArtifact is a fully managed package repository service that makes it easy for organizations to securely store and share software packages used for application development. You can use CodeArtifact with popular build tools and package managers such as NuGet, Maven, Gradle, npm, yarn, pip, twine, and the Swift Package Manager. CodeArtifact supports on-demand importing of packages from public repositories such as npmjs.com, maven.org, and pypi.org. This allows your organization’s developers to fetch all their packages from one single source of truth: your CodeArtifact repository. Simple applications routinely include dozens of packages. Large enterprise applications might have hundreds of dependencies. These packages help developers speed up the development and testing process by providing code that solves common programming challenges such as network access, cryptographic functions, or data format manipulation. These packages might be produced by other teams in your organization or maintained by third parties, such as open source projects. To minimize the risks of supply chain attacks, some organizations manually vet the packages that are available in internal repositories and the developers who are authorized to update these packages. There are three ways to update a package in a repository. Selected developers in your organization might push package updates. This is typically the case for your organization’s internal packages. Packages might also be imported from upstream repositories. 
An upstream repository might be another CodeArtifact repository, such as a company-wide source of approved packages, or an external public repository offering popular open source packages. Here is a diagram showing the different ways a package can be exposed to your developers. When managing a repository, it is crucial to define how packages can be downloaded and updated. Allowing package installation or updates from external upstream repositories exposes your organization to typosquatting or dependency confusion attacks, for example. Imagine a bad actor publishing a malicious version of a well-known package under a slightly different name. For example, instead of coffee-script, the malicious package is cofee-script, with only one “f.” When your repository is configured to allow retrieval from upstream external repositories, all it takes is a distracted developer working late at night typing npm install cofee-script instead of npm install coffee-script to inject malicious code into your systems. CodeArtifact defines three permissions for the three possible ways of updating a package. Administrators can allow or block installation and updates coming from internal publish commands, from an internal upstream repository, or from an external upstream repository. Until today, repository administrators had to manage these important security settings package by package. With today’s update, repository administrators can define these three security parameters for a group of packages at once. The packages are identified by their type, their namespace, and their name. This new capability operates at the domain level, not the repository level. It allows administrators to enforce a rule for a package group across all repositories in their domain. They don’t have to maintain package origin control configuration in every repository.
Let’s see in detail how it works. Imagine that I manage an internal package repository with CodeArtifact and that I want to distribute only the versions of the AWS SDK for Python, also known as boto3, that have been vetted by my organization. I navigate to the CodeArtifact page in the AWS Management Console, and I create a python-aws repository that will serve vetted packages to internal developers. This creates a staging repository in addition to the repository I created. The external packages from pypi will first be staged in the pypi-store internal repository, where I will verify them before serving them to the python-aws repository, which is where my developers will connect to download them. By default, when a developer authenticates against CodeArtifact and types pip install boto3, CodeArtifact downloads the packages from the public pypi repository, stages them in pypi-store, and copies them to python-aws. Now, imagine I want to block CodeArtifact from fetching package updates from the upstream external pypi repository. I want python-aws to only serve packages that I approved from my pypi-store internal repository. With the new capability that we released today, I can apply this configuration to a group of packages. I navigate to my domain and select the Package Groups tab. Then, I select the Create Package Group button. I enter the Package group definition. This expression defines what packages are included in this group. Packages are identified using a combination of three components: package format, an optional namespace, and name.
Here are a few examples of patterns that you can use for each of the allowed combinations:

All package formats: /*
A specific package format: /npm/*
Package format and namespace prefix: /maven/com.amazon~
Package format and namespace: /npm/aws-amplify/*
Package format, namespace, and name prefix: /npm/aws-amplify/ui~
Package format, namespace, and name: /maven/org.apache.logging.log4j/log4j-core$

I invite you to read the documentation to learn all the possibilities. In my example, there is no concept of namespace for Python packages, and I want the group to include all packages with names starting with boto3 coming from pypi. Therefore, I write /pypi//boto3~. Then, I define the security parameters for my package group. In this example, I don’t want my organization’s developers to publish updates. I also don’t want CodeArtifact to fetch new versions from the external upstream repositories. I want to authorize only package updates from my internal staging repository. I uncheck all Inherit from parent group boxes. I select Block for Publish and External upstream. I leave Allow on Internal upstream. Then, I select Create Package Group. Once defined, developers are unable to install package versions other than the ones authorized in the python-aws repository. When I, as a developer, try to install another version of the boto3 package, I receive an error message. This is expected because the newer version of the boto3 package is not available in the upstream staging repo, and there is a block rule that prevents fetching packages or package updates from external upstream repositories. Similarly, let’s imagine your administrator wants to protect your organization from dependency substitution attacks. All your internal Python package names start with your company name (mycompany). The administrator wants to block developers from accidentally downloading packages from pypi.org whose names start with mycompany.
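The pattern grammar just described (* for everything at a level, ~ for a prefix match, $ for an exact match on the final component) can be sketched as a small matcher. To be clear, this is an illustration of the documented syntax, not CodeArtifact's actual implementation; the `matches` function and its argument names are assumptions made for this example.

```python
# Sketch of CodeArtifact-style package-group pattern matching.
# A package is identified by (format, optional namespace, name);
# patterns as described above always end in "*", "~", or "$".

def matches(pattern: str, fmt: str, namespace: str, name: str) -> bool:
    parts = pattern.lstrip("/").split("/")
    coords = [fmt, namespace, name]
    for i, part in enumerate(parts):
        if part == "*":                       # everything from this level down
            return True
        if i >= len(coords):
            return False
        if part.endswith("~"):                # prefix match, ends the pattern
            return coords[i].startswith(part[:-1])
        if part.endswith("$"):                # exact match, ends the pattern
            return coords[i] == part[:-1]
        if coords[i] != part:                 # intermediate component must match exactly
            return False
    return False  # pattern lacked a terminating "*", "~", or "$"

# The examples from the article:
print(matches("/pypi//boto3~", "pypi", "", "boto3"))   # -> True
print(matches("/npm/aws-amplify/*", "npm", "aws-amplify", "ui-react"))  # -> True
```

Under this reading, /pypi//mycompany~ from the dependency-substitution scenario below would cover mycompany.foo and mycompany.bar but nothing else in the pypi format.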
The administrator creates a rule with the pattern /pypi//mycompany~ with publish=allow, external upstream=block, and internal upstream=block. With this configuration, internal developers or your CI/CD pipeline can publish those packages, but CodeArtifact will not import any packages from pypi.org that start with mycompany, such as mycompany.foo or mycompany.bar. This prevents dependency substitution attacks for these packages. Package groups are available in all AWS Regions where CodeArtifact is available, at no additional cost. They help you better control how packages and package updates land in your internal repositories. They help prevent various supply chain attacks, such as typosquatting or dependency confusion. It’s one additional configuration that you can add today to your infrastructure-as-code (IaC) tools to create and manage your CodeArtifact repositories. Go and configure your first package group today. -- seb
View the full article
-
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. Software supply chains (SSCs) have become a prevalent topic in the software development world, and for good reason. As software development has matured, so has our understanding of the dependencies that can affect the security and the legal standing of our products. We only have to hear names like Log4Shell to remember how crippling a single vulnerability can be. View the full article
-
A new report from CrowdStrike has found cyberattacks are getting faster, with breakout times down to an average of 62 minutes compared to an average of 84 minutes in 2023. 34 new threat actors have also joined the cyber scene, with a total of over 230 individual threat actors now tracked by the company. A new record breakout time was also recorded at just two minutes and seven seconds to jump from an infected host to another host within the network. Hackers are following their targets into the cloud The report highlights that the rapid increase in the speed of attacks and the use of AI assistance are “driving a tectonic shift in the security landscape and the world.” The human factor has increasingly become the main source of entry for threat actors, with interactive intrusions and hands-on-keyboard attacks increasing by 60%. Many threat actors have increased their use of social engineering and phishing campaigns to gain abusable credentials, and ultimately access to their target’s environment. As businesses continue their journey towards the cloud, threat actors have followed, with cloud intrusions increasing by 75% since last year. Threat actors are also seeking greater knowledge of the cloud itself, with the exploitation of cloud-unique features experiencing a 110% increase. Threat actors are sowing further disruption by exploiting trusted relationships to compromise supply chains, allowing the actor to “cast a wide net” in its victim selection. CrowdStrike highlights successful attempts by the North Korean ‘Labyrinth Chollima’ to compromise trusted software as a delivery mechanism for data-stealing malware. CrowdStrike also warns that state-sponsored adversaries are highly likely to target critical upcoming elections.
Russia, China, and Iran all have motivations to influence and disrupt elections and will likely launch disinformation campaigns that take advantage of geopolitical tensions and conflicts to influence voters and exacerbate societal fractures. Threat actors are stepping up their use of AI-generated content, including artificial images and video, to spread misinformation on social media. CrowdStrike expects increasing abuse of open source or publicly available LLMs to continue, rather than threat actors developing their own home-grown models. “Over the course of 2023, CrowdStrike observed unprecedented stealthy operations from brazen eCrime groups, sophisticated nation-state actors and hacktivists targeting businesses in every sector spanning the globe,” said Adam Meyers, head of Counter Adversary Operations, CrowdStrike. “Rapidly evolving adversary tradecraft honed in on both cloud and identity with unheard of speed, while threat groups continued to experiment with new technologies, like GenAI, to increase the success and tempo of their malicious operations. “To defeat relentless adversaries, organizations must embrace a platform-approach, fueled by threat intelligence and hunting, to protect identity, prioritize cloud protection, and give comprehensive visibility into areas of enterprise risk.” More from TechRadar Pro
Technical debt and cloud issues are the biggest barriers to digital transformation for many companies
These are the best cloud firewalls and best cloud backup services
Here is our guide to the best endpoint protection software
View the full article
-
As we announced at DockerCon, we’re now providing a free Docker Scout Team subscription to all Docker-Sponsored Open Source (DSOS) program participants. If your open source project participates in the DSOS program, you can start using Docker Scout today. If your open source project is not in the Docker-Sponsored Open Source program, you can check the requirements and apply. For other customers, Docker Scout is already generally available. Refer to the Docker Scout product page to learn more. Why use Docker Scout? Docker Scout is a software supply chain solution designed to make it easier for developers to identify and fix supply chain issues before they hit production. To do this, Docker Scout:

Gives developers a centralized view of the tools they already use to see all the critical information they need across the software supply chain
Makes clear recommendations on how to address those issues, including security issues and opportunities to improve reliability
Provides automation that highlights new defects, failures, or issues

Docker Scout allows you to prevent and address flaws where they start. By identifying issues earlier in the software development lifecycle and displaying information in Docker Desktop and the command line, Docker Scout reduces interruptions and rework. Supply chain security is a big focus in software development, with attention from enterprises and governments. Software is complex, and when security, reliability, and stability issues arise, they’re often the result of an upstream library. So developers don’t just need to address issues in the software they write but also in the software their software uses. These concerns apply just as much to open source projects as to proprietary software. But the focus on improving the software supply chain results in an unfunded mandate for open source developers.
A research study by the Linux Foundation found that almost 25% of respondents said the cost of security gaps was “high” or “very high.” Most open source projects don’t have the budget to address these gaps. With Docker Scout, we can reduce the burden on open source projects. Conclusion At Docker, we understand the importance of helping open source communities improve their software supply chain. We see this as a mutually beneficial relationship with the open source community. A well-managed supply chain doesn’t just help the projects that produce open source software; it helps downstream consumers through to the end user. For more information, refer to the Docker Scout documentation. Learn more

Join our “Improving Software Supply Chain Security for Open Source Projects” webinar on Wednesday, February 7, 2024 at 1 PM Eastern (1700 UTC). Watch on LinkedIn or on the Riverside streaming platform.
Try Docker Scout. Looking to get up and running? Use our Quickstart guide.
Vote on what’s next! Check out the Docker Scout public roadmap.
Have questions? The Docker community is here to help.
Not a part of DSOS? Apply now.

View the full article
-
- docker
- supply chains
-
(and 1 more)
Tagged with:
-
Starting today, demand planners can configure an auto-recurring (i.e., perpetual) forecast at the desired schedule frequency without any manual intervention. Previously, demand planners had to manually initiate a planning cycle. Now, demand planners can, as part of their forecast configuration settings, define the cadence at which forecasts are generated and published to align with downstream needs, such as supply and distribution planning. For example, a demand planner can set the forecast interval (weekly or monthly) as well as the specific day, time, and time zone. Based on the defined settings, the current planning cycle will be published to Amazon S3 and a new planning cycle will begin. View the full article
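The scheduling settings just described (interval, day, time, time zone) amount to computing the next recurrence of a wall-clock slot. Here is a minimal sketch of that computation for a weekly cadence; `next_weekly_run` is a hypothetical helper written for illustration, not part of the AWS service.

```python
# Hedged sketch: find the next weekly run given a target weekday
# (0 = Monday), hour, and IANA time zone, as a demand planner might
# configure. Arithmetic is done on wall-clock time in the target zone.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_weekly_run(now: datetime, weekday: int, hour: int, tz: str) -> datetime:
    """Return the next occurrence of weekday/hour in time zone tz after now."""
    local = now.astimezone(ZoneInfo(tz))
    candidate = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    days_ahead = (weekday - candidate.weekday()) % 7
    candidate += timedelta(days=days_ahead)
    if candidate <= local:          # slot already passed this week
        candidate += timedelta(days=7)
    return candidate
```

For instance, with a Monday 09:00 UTC schedule evaluated on a Monday at 12:00 UTC, the next run lands the following Monday at 09:00.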
-
The continuous integration/continuous delivery (CI/CD) pipeline encompasses the internal processes and tools that accelerate software development and allow developers to release new features. Many parts of the CI/CD pipeline are automated. That’s a good thing because it accelerates workflows and reduces development and testing time. However, it also exposes the pipeline to cyberattacks, because automated steps run without a human in the loop and are rarely monitored continuously. Here are some things to do to keep the software supply chain secure by protecting the CI/CD pipeline. View the full article
-
We are excited to announce that Docker Scout General Availability (GA) now allows developers to continuously evaluate container images against a set of out-of-the-box policies, aligned with software supply chain best practices. These new capabilities also include a full suite of integrations enabling you to attain visibility from development into production. These updates strengthen Docker Scout’s position as integral to the software supply chain... View the full article
-
At the KubeCon + CloudNativeCon North America conference this week, JFrog announced it contributed the Pyrsia project, which uses blockchain technologies to secure software packages, to the Continuous Delivery (CD) Foundation. Stephen Chin, vice president of developer relations at JFrog and governing board member for the CD Foundation, said the goal is to increase the […] The post JFrog Gives Pyrsia to CD Foundation to Secure Software Supply Chains appeared first on DevOps.com. View the full article
-
JFrog today added a JFrog Advanced Security module to its Artifactory repository that enables DevOps teams to scan both binaries and source code for vulnerabilities and misconfigurations. Stephen Chin, vice president of developer relations for JFrog, said that approach will enable DevOps teams to ensure applications are secure before they are deployed in a production […] The post JFrog Adds Module to Better Secure Software Supply Chains appeared first on DevOps.com. View the full article
-
More than a year after the massive SolarWinds cyberattack, targeted companies continue to feel its ramifications in both reputation and financial cost. Moreover, the global software supply chain remains vulnerable to severe attacks, whether from a hostile nation-state like Russia (now increasingly in the cybersecurity spotlight amid fears of retaliation over U.S. sanctions) or from […] The post To Prevent Supply Chain Attacks, Build Secure Code appeared first on DevOps.com. View the full article
-
Scribe Security today unveiled a Scribe Integrity tool that scans software artifacts to make sure they comply with IT organizations’ security policies before they are integrated into an application. The Scribe Integrity tool authenticates open source and proprietary source code before it is uploaded into a build. It assumes that all artifacts are “guilty” until […] The post Scribe Security Unveils Pair of Tools to Secure Software Supply Chains appeared first on DevOps.com. View the full article
-
supply chains The Age of Software Supply Chain Disruption
Devops.com posted a topic in General Discussion
The software supply chain is swiftly becoming a widespread attack vector, and securing it is now in the spotlight. Software supply chain attacks have become a given in 2022, reports Darktrace. SolarWinds, Kaseya and GitLab are just a few examples of organizations that have been vulnerable to attack in recent years. We’ve also witnessed an increasing […] View the full article -
Many software projects are not prepared to build securely by default, which is why the Linux Foundation and Open Source Security Foundation (OpenSSF) partnered with technology industry leaders to create Sigstore, a set of tools and a standard for signing, verifying and protecting software. Sigstore is one of several innovative technologies that have emerged to improve the integrity of the software supply chain, reducing the friction developers face in implementing security within their daily work. To make it easier to use Sigstore’s toolkit to its full potential, OpenSSF and Linux Foundation Training & Certification are releasing a free online training course, Securing Your Software Supply Chain with Sigstore (LFS182x). This course is designed with end users of Sigstore tooling in mind: software developers, DevOps engineers, security engineers, software maintainers, and related roles. To make the best use of this course, you will need to be familiar with Linux terminals and using command line tools. You will also need to have intermediate knowledge of cloud computing and DevOps concepts, such as using and building containers and CI/CD systems like GitHub Actions, many of which can be learned through other free Linux Foundation Training & Certification courses. Upon completing this course, participants will be able to inform their organization’s security strategy and build software more securely by default. The hope is this will help you address attacks and vulnerabilities that can emerge at any step of the software supply chain, from writing to packaging and distributing software to end users. Enroll today and improve your organization’s software development cybersecurity best practices. The post Free Training Course Teaches How to Secure a Software Supply Chain with Sigstore appeared first on Linux Foundation. The post Free Training Course Teaches How to Secure a Software Supply Chain with Sigstore appeared first on Linux.com. View the full article
-
Forum Statistics
63.6k Total Topics
61.7k Total Posts