Jump to content

Search the Community

Showing results for tags 'vulnerabilities'.

  • Search By Tags

    Type tags separated by commas.
  • Search By Author

Content Type


Forums

There are no results to display.

There are no results to display.


Find results in...

Find results that contain...


Date Created

  • Start

    End


Last Updated

  • Start

    End


Filter by number of...

Joined

  • Start

    End


Group


Website URL


LinkedIn Profile URL


About Me


Cloud Platforms


Cloud Experience


Development Experience


Current Role


Skills


Certifications


Favourite Tools


Interests

  1. Overview In an effort to safeguard our customers, we perform proactive vulnerability research with the goal of identifying zero-day vulnerabilities that are likely to impact the security of leading organizations. Recently, we decided to take a look at Ant Media Server with the goal of identifying any vulnerabilities within the application. We performed testing against […] The post Local Privilege Escalation Vulnerability in Ant Media Server (CVE-2024-32656) appeared first on Praetorian. The post Local Privilege Escalation Vulnerability in Ant Media Server (CVE-2024-32656) appeared first on Security Boulevard. View the full article
  2. What is a cybersecurity vulnerability, how do they happen, and what can organizations do to avoid falling victim? Among the many cybersecurity pitfalls, snares, snags, and hazards, cybersecurity vulnerabilities and the likes of zero-day attacks are perhaps the most insidious. Our lives are unavoidably woven into the fabric of digital networks, and cybersecurity has become... The post Understanding Cybersecurity Vulnerabilities appeared first on TrueFort. The post Understanding Cybersecurity Vulnerabilities appeared first on Security Boulevard. View the full article
  3. The Ubuntu security team has recently rolled out critical security updates aimed at addressing several vulnerabilities identified in Squid, a widely used web proxy cache server. These vulnerabilities, if left unaddressed, could potentially expose systems to denial-of-service attacks. Let’s delve into the specifics of these vulnerabilities and understand their implications. Recent Squid Vulnerabilities Fixed […] The post Multiple Squid Vulnerabilities Fixed in Ubuntu appeared first on TuxCare. The post Multiple Squid Vulnerabilities Fixed in Ubuntu appeared first on Security Boulevard. View the full article
  4. A critical flaw has been discovered in the Rust standard library that could lead to serious command injection attacks against Windows users. The BatBadBut vulnerability, tracked as CVE-2024-24576, carries the highest possible CVSS score of 10.0, indicating the utmost severity. However, its impact is limited to scenarios where batch files are invoked on Windows systems […] The post BatBadBut Vulnerability Exposes Windows Systems To Attacks appeared first on TuxCare. The post BatBadBut Vulnerability Exposes Windows Systems To Attacks appeared first on Security Boulevard. View the full article
  5. Amazon Inspector now offers continuous monitoring of your Amazon EC2 instances for software vulnerabilities without installing an agent or additional software. Currently, Inspector leverages the widely deployed AWS Systems Manager (SSM) agent to assess your EC2 instances for third-party software vulnerabilities. With this expansion, Inspector now offers two scan modes for EC2 scanning, hybrid scan mode and agent-based scan mode. In hybrid scan mode, Inspector relies on SSM agents to collect information from instances to perform vulnerability assessments and automatically switches to agentless scanning for instances that do not have SSM agents installed or configured. For agentless scanning, Inspector takes snapshots of EBS volumes to collect software application inventory from the instances to perform vulnerability assessments. For agent-based scan mode, Inspector only scans instances that have a SSM agent installed and configured. New customers enabling EC2 scanning are configured in hybrid mode by default, while existing customers can migrate to hybrid mode by simply visiting the EC2 settings page within the Inspector console. Once enabled, Inspector automatically discovers all your EC2 instances and starts evaluating them for software vulnerabilities. View the full article
  6. Cloud technologies are a rapidly evolving landscape. Securing cloud applications is everyone’s responsibility, meaning application development teams are needed to follow strict security guidelines from the earliest development stages, and to make sure of continuous security scans throughout the whole application lifecycle. The rise of generative AI enables new innovative approaches for addressing longstanding challenges with reduced effort. This post showcases how engineering teams can automate efficient remediation of container CVEs (common vulnerabilities and exposures) early in their continuous integration (CI) pipeline. Using cloud services such as Amazon Bedrock, Amazon Inspector, AWS Lambda, and Amazon EventBridge you can architect an event-driven serverless solution for automatically addressing container vulnerabilities detection and patching. Using the power of generative AI and serverless technologies can help simplify what used to be a complex challenge. Overview The exponential growth of modern applications has enabled developers to build highly decoupled microservice-based architectures. However, the distributed nature of those architectures comes with a set of operational challenges. Engineering teams were always responsible for various security aspects of their application environments, such as network security, IAM permissions, TLS certificates, and code vulnerability scanning. Addressing these aspects at the scale of dozens and hundreds of microservices requires a high degree of automation. Automation is imperative for efficient scaling as well as maintaining control and governance. Running applications in containers is a common approach for building microservices. It allows developers to have the same CI pipeline for their applications, regardless of whether they use Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), or AWS Lambda to run it. No matter which programming language you use for your application, the deployable artifact is a container image that commonly includes application code and its dependencies. It is imperative for application development teams to scan those images for vulnerabilities to make sure of their safety prior to deploying them to cloud environments. Amazon Elastic Container Registry (Amazon ECR) is an OCI artifactory that provides two types of scanning, Basic and Enhanced, powered by the Amazon Inspector. The image scanning occurs after the container image is pushed to the registry. The basic scanning is triggered automatically when a new image is pushed, while the enhanced scanning runs continuously for images hosted in Amazon ECR. Both types of scans generate scan reports, but it is still the development team’s responsibility to act on it: read the report, understand the vulnerabilities, patch code, open a pull request, merge, and run CI again. The following steps illustrate how you can build an automated solution that uses the power of generative AI and event-driven serverless architectures to automate this process. The following sample solution uses the “in-context learning” approach, a technique that tailors AI responses to narrow scenarios. Used for CVE patching, the solution builds AI prompts based on the programming language in question and a previously generated example of what a PR might look like. This approach underscores a crucial point: for some narrow use cases, using a smaller Large Language Model (LLM), such as Llama 13B, with assisted prompt might yield equally effective results as a bigger LLM, such as Llama 2 70B. We recommend that you evaluate both few-shot prompts with smaller LLMs and zero-shot prompts with larger LLMs to find the model that works most efficiently for you. Read more about providing prompts and examples in the Amazon Bedrock documentation. Solution architecture Prior to packaging the application as a container, engineering teams should make sure that their CI pipeline includes steps such as static code scanning with tools such as SonarQube or Amazon CodeGuru, and image analysis tools such as Trivy or Docker Scout. Validating your code for vulnerabilities at this stage aligns with the shift-left mentality, and engineers should be able to detect and address potential threats in their code in the earliest stages of development. After packaging the new application code and pushing it to Amazon ECR, the image scanning with Amazon Inspector is triggered. Engineers can use languages supported by Amazon Inspector. As image scanning runs, Amazon Inspector emits EventBridge Finding events for each vulnerability detected. CI is triggered by a developer pushing new code to the shared code repository. This step is not implemented in the provided sample, and different engineering teams can use different tools for their CI pipeline. The application container image is built and pushed to the Amazon ECR. Amazon Inspector is triggered automatically. Note that you must first enable Amazon Inspector ECR enhanced scanning in your account. As Amazon Inspector scans the image, it emits findings in a format of events to EventBridge. Each finding generates a separate event. See the example JSON payload of a finding event in the Inspector documentation. EventBridge is configured to invoke a Lambda function for each finding event. Lambda is invoked for each finding. The function aggregates and updates the Amazon DynamoDB database table with each finding information. Once Amazon Inspector completes the scan, it emits the scan complete event to EventBridge, which calls the PR creation microservice hosted as an Amazon ECS Fargate Task to start the PR generation process. PR creation microservice clones the code repo to see the current dependencies list. Then it retrieves the aggregated findings data from DynamoDB, builds a prompt using the dependencies list, findings data, and in-context learning example based on previous scans. The microservice invokes Amazon Bedrock to generate a new PR content. Once the PR content is generated, the microservice opens a new PR and pushes changes upstream. Engineering teams validate the PR and merge it with code repository. Overtime, as engineering teams gain trust with the process, they might consider automating the merge part as well. Sample implementation Use the example project to replicate this solution in your AWS account. Follow the instructions in README.md for provisioning and testing the sample project using Hashicorp Terraform. Under the /apps directory of the sample project you should see two applications. The /apps/my-awesome-application intentionally contains a set of vulnerable dependencies. This application was used to create examples of what a PR should look like. Once the engineering team took this application through Amazon Inspector and Amazon Bedrock manually, a file containing this example was generated. See in_context_examples.py. Although it can be a one-time manual process, engineering teams can also periodically add more examples as they evolve and improve the generative AI model response. The /apps/my-amazing-application is the actual application that the engineering team works on delivering business value. They deploy this application several times a day to multiple environments, and they want to make sure that it doesn’t have vulnerabilities. Based on the in-context example created previously, they’re continuously using Amazon Inspector to detect new vulnerabilities, as well as Amazon Bedrock to automatically generate pull requests that patch those vulnerabilities. The following example shows a pull request generated when a member of the development team has introduced vulnerable dependencies. The pull request contains details about the packages with detected vulnerabilities and CVEs, as well as recommendations for how to patch them. Moreover, the pull request already contains an updated version of the requirements.txt file with the changes in place. The only thing left for the engineering team to do is review and merge the pull request. Conclusion This post illustrates a simple solution to address container image (OCI) vulnerabilities using AWS Services such as Amazon Inspector, Amazon ECR, Amazon Bedrock, Amazon EventBridge, AWS Lambda, and Amazon Fargate. The serverless and event-driven nature of this solution helps make sure of cost efficiency and minimal operational overhead. Engineering teams do not need to run additional infrastructure to implement this solution. Using generative AI and serverless technologies helps simplify what used to be a complex and laborious process. Having an automated workflow in place allows engineering teams to focus on delivering business value, thereby improving overall security posture without extra operational overhead. Checkout step-by-step deployment instructions and sample code for the solution discussed in the post in this GitHub repository. References https://aws.amazon.com/blogs/aws/amazon-bedrock-now-provides-access-to-llama-2-chat-13b-model/ https://docs.aws.amazon.com/bedrock/latest/userguide/general-guidelines-for-bedrock-users.html https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-a-prompt.html#few-shot-prompting-vs-zero-shot-prompting View the full article
  7. Traditional cybersecurity is laser-focused on incident detection and response. In other words, it’s built around a Security Operations Centre (SOC). That’s no bad thing in itself. Read between the lines, however, and that assumes we’re waiting on the threats to come to us. With cyber adversaries evolving their tactics through AI, automated ransomware campaigns, and other advanced persistent threats (APTs), adopting advanced, proactive measures has never been more critical. Except that your SOC team is already drowning in vulnerabilities and knee-jerk remediations. How can they even begin to manage this? Today’s ever worsening threat landscape calls for a strategic pivot towards the establishment of a Vulnerability Operations Centre (VOC) to rethink the foundational challenges of vulnerability management and cyber resilience. The Strategic Imperative of the VOC Traditional strategies are necessary but painfully insufficient. As an industry, we’ve predominantly been reactive, focusing on the detection and mitigation of immediate threats. This short-term perspective overlooks the underlying, ongoing challenge posed by a vast backlog of vulnerabilities, many of which have been known but unaddressed for years. Alarmingly, over 76% of vulnerabilities currently exploited by ransomware gangs were discovered more than three years ago. Either SOC teams don’t care – which we know is not true – or they can’t keep up on their own. It’s time to admit that the main problem they face is knowing which handful of threats to focus on amidst the tidal wave. The VOC provides a new approach to this challenge, offering a centralized, automated, and risk-based approach to vulnerability management. Unlike the SOC, whose primary objective is to manage incidents and alerts, the VOC is designed to predict and prevent these incidents from occurring in the first place. It focuses exclusively on the prevention, detection, analysis, prioritization, and remediation of security flaws that affect an organization's unique IT environment. By doing so, VOCs enable organizations to address the far narrower, infinitely more manageable list of vulnerabilities that pose a significant, actual threat to their operations and sensitive data. Linking SOC to VOC: A synergistic approach The synergy between the SOC and VOC is essential to creating a comprehensive security framework that not only responds to threats but proactively works to prevent them. The process of linking SOCs to VOCs begins with CISOs recognizing that patch management is not a standalone task but a core component of the broader security strategy. A dedicated team or unit, ideally under the guidance of the Chief Information Security Officer (CISO) or another appointed security leader, should spearhead the establishment of the VOC. This approach underscores the importance of a clear directive from the highest levels of cybersecurity leadership, ensuring that the VOC is not just an operational unit, but a strategic endeavor aimed at enhancing the organization's overall cyber resilience. Establishing a VOC involves leveraging existing vulnerability assessment tools to create a baseline of the current security posture. This initial step is crucial for understanding the scope and scale of vulnerabilities across the organization's assets. From this baseline, the team can aggregate, deduplicate, and normalize vulnerability data to produce a clear, actionable dataset. Integrating this dataset into the SOC’s security information and event management (SIEM) systems enhances visibility and context for security events, enabling a more nuanced and informed response to potential threats. The transition from technical vulnerability assessment to risk-based prioritization is a pivotal aspect of the VOC’s function. This involves evaluating how each identified vulnerability impacts the business, then prioritising remediation efforts based on this impact. Such a shift allows for a more strategic allocation of resources to focus on vulnerabilities that pose the highest risk to the organization. Automation must play a key role in this process, enabling routine vulnerability scans, alert prioritization, and patch deployment to be conducted with minimal human intervention. This not only streamlines operations but also allows analysts to concentrate on complex tasks that require intricate human judgment and expertise. The Immediate Power of VOC Implementation The VOC empowers cybersecurity teams with a comprehensive and systematic approach to vulnerability management, significantly simplifying the process of handling an exponentially increasing number of CVEs. The immediate benefits include: Centralization of Vulnerability Data: By aggregating and analyzing vulnerability information, the VOC provides a unified view that makes life easier for teams identifying and prioritizing critical vulnerabilities. Automation and Streamlining Processes: The use of automation tools within the VOC framework accelerates the detection, analysis, and remediation processes. This not only reduces the manual workload but also minimizes the likelihood of human error, enhancing the overall efficiency of vulnerability management. Risk-Based Prioritization: Implementing a risk-based approach allows teams to focus their efforts on vulnerabilities that pose the highest risk to the organization, ensuring that resources are allocated effectively and that critical threats are addressed ASAP. Enhanced Collaboration and Communication: The VOC fosters better collaboration across different teams by breaking down silos and ensuring that all relevant stakeholders are informed about the vulnerability management process. This shared understanding improves the organization's ability to respond to vulnerabilities swiftly and effectively. Ownership and Accountability: Centralizing operations for vulnerability management within the VOC framework ensures clear accountability and ownership across teams. This organizational clarity is vital to removing siloes and reducing risk, as it establishes well-defined roles and responsibilities for vulnerability management, ensuring that all team members understand their part in safeguarding systems and networks. That’s a lot to digest but, put simply, it’s time to rethink how we approach vulnerability management. Check the news – or better yet, check in with the rest of your cybersecurity team. A VOC reduces the crushing burden of vulnerability management on SOCs and makes the lives of all security teams that much easier. By centralizing operations, automating routine tasks, and emphasizing risk-based prioritization, the VOC enhances the organization's security posture. Linking your SOC to your future VOC creates a seamless flow of actionable intelligence directly into the threat response mechanism. The endgame? Ensuring that your organization's defense mechanisms are both proactive and responsive for a far more secure and resilient digital environment. We feature the best cloud antivirus. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
  8. In their haste to deploy LLM tools, organizations may overlook crucial security practices. The rise in threats like Remote Code Execution indicates an urgent need to improve security measures in AI development. The post Vulnerabilities for AI and ML Applications are Skyrocketing appeared first on Security Boulevard. View the full article
  9. Read this quick guide to the types of vulnerabilities that affect containers. The post What Makes Containers Vulnerable? appeared first on Mend. The post What Makes Containers Vulnerable? appeared first on Security Boulevard. View the full article
  10. Overview Recently, NSFOCUS CERT detected that Oracle has released a security announcement and fixed two information disclosure vulnerabilities (CVE-2024-21006/CVE-2024-21007) in Oracle WebLogic Server. Due to the defects of T3/IIOP protocol, unauthenticated attackers can send malicious requests through servers affected by T3/IIOP protocol. Access to sensitive information on the target system. Affected users should take measures […] The post WebLogic T3/IIOP Information Disclosure Vulnerability (CVE-2024-21006/CVE-2024-21007) appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.. The post WebLogic T3/IIOP Information Disclosure Vulnerability (CVE-2024-21006/CVE-2024-21007) appeared first on Security Boulevard. View the full article
  11. A Pandora's Box: Unpacking 5 Risks in Generative AI madhav Thu, 04/18/2024 - 05:07 Generative AI (GAI) is becoming increasingly crucial for business leaders due to its ability to fuel innovation, enhance personalization, automate content creation, augment creativity, and help teams explore new possibilities. This is confirmed by surveys, with 83% of business leaders saying they intend to increase their investments in the technology by 50% or more in the next six to 12 months. Unfortunately, the increasing use of AI tools has also brought a slew of emerging threats that security and IT teams are ill-equipped to deal with. Almost half (47%) of IT practitioners believe security threats are increasing in volume or severity and that the use of AI exacerbates these risks. It has become a race between security teams and advanced attackers to see who will be the first to take advantage of AI’s incredible abilities successfully. The Rising Threat Landscape Adversaries are already harnessing the power of generative AI for several nefarious purposes. Stealing the Model: AI models are the crown jewels of organizations leveraging machine learning algorithms. However, they are also prime targets for malicious actors seeking to gain an unfair advantage or disrupt operations. By infiltrating systems or exploiting vulnerabilities, adversaries can steal these models, leading to intellectual property theft and competitive disadvantage. AI Hallucinations: These are instances where artificial intelligence systems generate outputs that are not grounded in reality or are inconsistent with the intended task. These hallucinations can occur for various reasons, such as errors in the AI's algorithms, biases in the training data, or limitations in the AI's understanding of the context or task. Data Poisoning: The integrity of AI systems relies heavily on the quality and reliability of the data they are trained on. Data poisoning involves injecting malicious inputs into training datasets, corrupting the learning process, and compromising the model's performance. This tactic can manipulate outcomes, undermine decision-making processes, and even lead to catastrophic consequences in critical applications like healthcare or finance. Prompt Injection: Prompt injection attacks target natural language processing (NLP) models by injecting specific prompts or queries designed to elicit unintended responses. These subtle manipulations can deceive AI systems into generating misleading outputs or executing unauthorized actions, posing significant risks in applications such as chatbots, virtual assistants, or automated customer service platforms. Extracting Confidential Information: As AI systems process vast amounts of data, they often handle sensitive or proprietary information. Malicious actors exploit vulnerabilities within AI infrastructures to extract confidential data, including customer records, financial transactions, or trade secrets. Such breaches jeopardize privacy and expose enterprises to regulatory penalties, legal liabilities, and reputational damage. Vulnerabilities in AI There have already been instances where AI has caused vulnerabilities in popular apps and software. Security researchers from Imperva described in a blog called XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT how they discovered multiple security vulnerabilities in OpenAI’s ChatGPT that, if exploited, would enable bad actors to hijack a user’s account. The company’s researchers pinpointed two cross-site scripting (XSS) vulnerabilities, along with other security issues, in the ChatGPT backend. They outlined the process of refining their exploit from requiring a user to upload a malicious file and interact in a specific manner, to merely necessitating a single click on a citation within a ChatGPT conversation. This was accomplished by exploiting client-side path traversal and a broken function-level authorization bug in the ChatGPT backend. In another Imperva blog, Hacking Microsoft and Wix with Keyboard Shortcuts, researchers focused on the anchor tag and its behavior with varying target attributes and protocols. They noted that its inconsistent behavior can confuse security teams, enabling bugs to go unnoticed and become potential targets for exploitation. The technique shown in this blog post was instrumental in exploiting the second XSS bug they found in ChatGPT. Upholding AI Responsibility: Enterprise Strategies In the face of evolving risks, one thing is clear: The proliferation of GAI will change investment plans in cybersecurity circles. Enterprises must prioritize responsible AI governance to mitigate threats and uphold ethical standards. There are several key strategies for organizations to navigate this complex landscape: Secure Model Development and Deployment: Implement robust security measures throughout the AI lifecycle, from data collection and model training to deployment and monitoring. Employ encryption, access controls, and secure development practices to safeguard models against unauthorized access or tampering. Foster Collaboration and Knowledge Sharing: Foster a culture of collaboration and information sharing within the organization and across industry sectors. By collaborating with peers, academia, and cybersecurity experts, enterprises can stay informed about emerging threats, share best practices, and collectively address common challenges. Embrace Transparency and Accountability: Prioritize transparency and accountability in AI decision-making processes. Document model development methodologies, disclose data sources and usage policies, and establish mechanisms for auditing and accountability to ensure fairness, transparency, and compliance. Investments in Training: Security staff must be trained in AI systems and Generative AI. All this requires new investments for Cybersecurity and must be budgeted through a multi-year budgeting cycle. Developing Corporate Policies: Developing AI policies that govern the responsible use of GAI tools within the business is also critical to ensure ethical decision-making and mitigate the potential risks associated with their use. These policies can protect against biases, safeguard privacy, and ensure transparency, fostering trust both within the business and among all its stakeholders. Changing regulations Regulations are also changing to accommodate the threats posed by AI. Efforts such as the White House Blueprint for an AI Bill of Rights and the EU AI Act provide the guardrails to guide the responsible design, use, and deployment of automated systems. One of the principles is about privacy by design to ensure that default consideration of privacy protections, including making sure that data collection conforms to reasonable expectations and that only data strictly needed for the specific context is collected. Consent management is also considered critical. In sensitive domains, consumer data and related inferences should only be used for strictly necessary functions, and the consumer must be protected by ethical review and use prohibitions. Several frameworks provide appropriate guidance for implementors and practitioners. However, we are still early and in the discovery phase, so it will take time for data privacy and security regulations to evolve to include GAI-related safety considerations. For now, the best thing that organizations and security teams can do is keep learning, invest in GAI training (not only for the security professionals but all staff), and budget for incremental investments. This should help them stay a step ahead of adversaries. Data Security Luke Richardson | Product Marketing Manager, Imperva More About This Author > Schema The post A Pandora’s Box: Unpacking 5 Risks in Generative AI appeared first on Security Boulevard. View the full article
  12. UIUC researchers gave GPT-4 the CVE advisories of critical cybersecurity vulnerabilities. The model successfully knew how to exploit 87% of them. View the full article
  13. In the digital landscape, security is paramount, especially for web servers handling vast amounts of data. As per recent reports, a vulnerability has emerged within the HTTP/2 protocol, shedding light on potential Denial of Service (DoS) attacks. Let’s explore the intricacies of the HTTP/2 vulnerability, its implications, and recommended measures for safeguarding against such threats. […] The post HTTP/2 Vulnerability: Protect Web Servers from DoS Attacks appeared first on TuxCare. The post HTTP/2 Vulnerability: Protect Web Servers from DoS Attacks appeared first on Security Boulevard. View the full article
  14. Cyber attacks have become increasingly prevalent. This has caused significant adverse impacts on businesses of all sizes. According to the latest Ponemon Institute’s State of Cybersecurity Report, 66% of respondents reported experiencing a cyber attack within the last 12 months. This underscores the critical need for robust cybersecurity measures. It is across various domains such […] The post Critical RCE Vulnerability in 92,000 D-Link NAS Devices appeared first on Kratikal Blogs. The post Critical RCE Vulnerability in 92,000 D-Link NAS Devices appeared first on Security Boulevard. View the full article
  15. Explore how Veriti Research uncovers rising Androxgh0st attacks, showing that even hackers face threats, underscoring proactive security and remediation needs. The post Vulnerable Villain: When Hackers Get Hacked appeared first on VERITI. The post Vulnerable Villain: When Hackers Get Hacked appeared first on Security Boulevard. View the full article
  16. CVE-2024-3094 is a critical Remote Code Execution (RCE) vulnerability found in the popular open-source XZ Utils library. This vulnerability affects XZ Utils versions 5.6.0 and 5.6.1 and could enable unauthorized attackers to gain remote access to affected systems. About XZ Utils XZ Utils is very popular on Linux. It supports lossless data compression on almost […] The post CVE-2024-3094: RCE Vulnerability Discovered in XZ Utils appeared first on Kratikal Blogs. The post CVE-2024-3094: RCE Vulnerability Discovered in XZ Utils appeared first on Security Boulevard. View the full article
  17. HTTP/2, a widely adopted web communication protocol, organizes data transmission through a binary framing layer, wherein all communication is divided into smaller messages called frames, each identified by a specific type, such as headers, data, and continuation frames. HTTP/2 HEADER frames facilitate the transmission of HTTP headers for requests and responses, employing the HPACK encoding […] The post HTTP/2 CONTINUATION Flood Vulnerability appeared first on Blog. The post HTTP/2 CONTINUATION Flood Vulnerability appeared first on Security Boulevard. View the full article
  18. Hello fellow readers! Have you ever wondered how the GitHub Security Lab performs security research? In this post, you’ll learn how we leverage GitHub products and features such as code scanning, CodeQL, Codespaces, and private vulnerability reporting. By the time we conclude, you’ll have mastered the art of swiftly configuring a clean, temporary environment for the discovery, verification, and disclosure of vulnerabilities in open source software (OSS). As you explore the contents of this post, you’ll notice we cover a wide array of GitHub tooling. If you have any feedback or questions, we encourage you to engage with our community discussions. Rest assured, this post is designed to be accessible to readers regardless of their prior familiarity with the tools we’ve mentioned. So, let’s embark on this journey together! Finding an interesting target The concept of an “interesting” target might have different meanings for each one of you based on the objective of your research. In order to find an “interesting” target, and also for this to be fun, you have to write down some filters first—unless you really want to dive into anything! From the language the project is written in, through the surface it unveils (is it an app? a framework?), every aspect is important to have a clear objective. Using GitHub Code Search Many times, we need to search widely for the use of a specific method or library. Either to get inspiration to use it, or pwn it , GitHub code search is there for us. We can use this feature to search across all public GitHub repositories with language, path, and regular expression filters! For instance, see this search query to find uses of readObject in Java files. For example, usually one of these aspects is the amount of people using the project (that is, the ones affected if a vulnerability occurred), which is provided by GitHub’s dependency network (for example, pytorch/pytorch), but it does not end there: we are also interested in how often the project is updated, the amount of stars, recent contributors, etc. Fortunately for us, some very smart people over at the Open Source Security Foundation (OpenSSF) already did some heavy work on this topic. OpenSSF Criticality Score The OpenSSF created the Open Source Project Criticality Score, which “defines the influence and importance of a project. It is a number between 0 (least-critical) and 1 (most-critical).” For further information on the specifics of the scoring algorithm, they can be found on the ossf/criticality_score repository or this post. A few months after the launch, Google collected information for the top 100k GitHub repositories and shared it in this spreadsheet. Within the GitHub Security Lab, we are continuously analyzing OSS projects with the goal of keeping the software ecosystem safe, focusing on high-profile projects we all depend on and rely on. In order to find the former, we base our target lists on the OpenSSF criticality score. The beginning of the process We published our Code Review of Frigate in which we exploited a deserialization of user-controlled data using PyYaml’s default Loader. It’s a great project to use as the running example in this blog post, given its >1.6 million downloads of Frigate container at the time of writing and the ease of the setup process. The original issue We won’t be finding new vulnerabilities in this blog post. Instead, we will use the deserialization of user-controlled data issue we reported to illustrate this post. Looking at the spreadsheet above, Frigate is listed at ~16k with a 0.45024 score, which is not yet deemed critical (>0.8), but not bad for almost two years ago! If you are curious and want to learn a bit more about calculating criticality scores, go ahead and calculate Frigate’s current score with ossf/criticality_score. Forking the project Once we have identified our target, let’s fork the repository either via GitHub’s UI or CLI. gh repo fork blakeblackshear/frigate --default-branch-only Once forked, let’s go back to the state in which we performed the audit: (sha=9185753322cc594b99509e9234c60647e70fae6f) Using GitHub’s API update a reference: gh api -X PATCH /repos/username/frigate/git/refs/heads/dev -F sha=9185753322cc594b99509e9234c60647e70fae6f -F force=true Or using git: git clone https://github.com/username/frigate cd frigate git checkout 9185753322cc594b99509e9234c60647e70fae6f git push origin HEAD:dev --force Now we are ready to continue! Code scanning and CodeQL Code scanning is GitHub’s solution to find, triage, and prioritize fixes for existing problems in your code. Code scanning alerts in the Security tab, provided by CodeQL Pull request alerts When code scanning is “connected” with a static analysis tool like GitHub’s CodeQL, that’s when the magic happens, but we will get there in a moment. CodeQL is the static code analysis engine developed by GitHub to automate security checks. CodeQL performs semantic and dataflow analysis, “letting you query code as though it were data.” CodeQL’s learning curve at the start can be a little bit steep, but absolutely worth the effort, as its dataflow libraries allow for a solution to any kind of situation. Learning CodeQL If you are interested in learning more about the world of static analysis, with exercises and more, go ahead and follow @sylwia-budzynska’s CodeQL zero to hero series. You may also want to join GitHub Security Lab’s Slack instance to hang out with CodeQL engineers and the community. Creating the CodeQL workflow file GitHub engineers are doing a fantastic job on making CodeQL analysis available in a one-click fashion. However, to learn what’s going on behind the scenes (because we are researchers ), we are going to do the manual setup. Running CodeQL at scale In this case, we are using CodeQL on a per-repository basis. If you are interested in running CodeQL at scale to hunt zero day vulnerabilities and their variants across repositories, feel free to learn more about Multi-repository Variant Analysis. In fact, the Security Lab has done some work to run CodeQL on more than 1k repositories at once! In order to create the workflow file, follow these steps: Visit your fork For security and simplicity reasons, we are going to remove the existing GitHub Actions workflows so we do not run unwanted workflows. To do so, we are going to use github.dev (GitHub’s web-based editor). For such code changes, that don’t require reviews, rebuild, or testing, simply browse to /.github/workflows, press the . (dot) key once and a VS Code editor will pop-up in your browser. And push the changes: Enable GitHub Actions (optional) Head to the GitHub Actions tab and click on “I understand my workflows, go ahead and enable them.”Note that this might not appear if you deleted all workflows previously. Head to the Security tab Click on “Code Scanning” Click “Configure scanning tool” In CodeQL analysis, click “Set up” and then click “Advanced” Now, you are guided to GitHub’s UI file editor with a custom workflow file (whose source is located at actions/starter-workflows) for the CodeQL Action. You can notice it is fully customized for this repository by looking at the on.push.branches and strategy.matrix.language values. Actions documentation If you are not familiar with GitHub Actions, refer to the documentation to understand the basics of a workflow. At first glance, we can see that there’s an analyze job that will run for each language defined in the workflow. The analyze job will: Clone the repository Initialize CodeQL In this step, github/codeql-action/init will download the latest release of CodeQL, or CodeQL packs, that are not available locally. Autobuild The autobuild step will try to automatically build the code present in the workspace (step 1) in order to populate a database for later analysis. If it’s not a compiled language, it will just succeed and continue. Analyze The CodeQL binary will be called to finalize the CodeQL database and run queries on it, which may take a few minutes. Advanced configuration using Security Lab’s Community QL Packs With CodeQL’s default configuration (default workflow), you will already find impactful issues. Our CodeQL team makes sure that these default queries are designed to have a very low false positive rate so that developers can confidently add them to their CI/CD pipeline. However, if you are a security team like the GitHub Security Lab, you may prefer using a different set of audit models and queries that have a low false negative rate, or community-powered models customized for your specific target or methodology. With that in mind, we recently published our CodeQL Community Packs, and using it is as easy as a one-liner in your workflow file. As the README outlines, we just need to add a packs variable in the Initialize CodeQL step: - name: Initialize CodeQL uses: github/codeql-action/init@v2 with: languages: ${{ matrix.language }} packs: githubsecuritylab/codeql-${{ matrix.language }}-queries Once done, we are ready to save the file and browse the results! For more information on customizing the scan configuration, refer to the documentation. The bit I find most interesting is Using a custom configuration file. Browsing alerts A few minutes in, the results are shown in the Security tab; let’s dig in! Available filters for the repository alerts Anatomy of a code scanning alert While you may think that running CodeQL locally would be easier, code scanning provides additional built-in mechanisms to avoid duplicated alerts, prioritize, or dismiss them. Also, the amount of information given by a single alert page can save you a lot of time! Code scanning alert for deserialization of user-controlled data found by CodeQL In a few seconds, this view answers a few questions: what, where, when, and how. Even though we can see a few lines surrounding the sink, we need to see the whole flow to determine whether we want to pursue the exploitation further. For that, click Show paths. Code scanning alert for deserialization of user-controlled data found by CodeQL In this view, we can see that the flow of the vulnerability begins from a user-controllable node (in CodeQL-fu, RemoteFlowSource), which flows without sanitizers to a known PyYaml’s sink. Digging into the alert Looking at the alert page and the flow paths alone isn’t enough information to guess whether this will be exploitable. While new_config is clearly something we could control, we don’t know the specifics of the Loader that yaml.load is using. A custom Loader can inherit quite a few kinds of Loaders, so we need to make sure that the inherited Loader allows for custom constructors. def load_config_with_no_duplicates(raw_config) -> dict: """Get config ensuring duplicate keys are not allowed.""" class PreserveDuplicatesLoader(yaml.loader.Loader): pass ... return yaml.load(raw_config, PreserveDuplicatesLoader) However, we know CodeQL uses dataflow for its queries, so it should already have checked the Loader type, right? The community helps CodeQL get better When we were writing the post about Frigate’s audit, we came across a new alert for the vulnerability we had just helped fix! Our fix suggestion was to change the Loader from yaml.loader.Loader to yaml.loader.SafeLoader, but it turns out that although CodeQL was accounting for a few known safe loaders, it was not accounting for classes inheriting these. Due to this, code scanning didn’t close the alert we reported. The world of security is huge and evolving everyday. That is, supporting every source, sanitizer, and sink that exists for each one of the queries is impossible. Security requires collaboration between developers and security experts, and we encourage everyone who uses CodeQL to collaborate in any of the following forms to bring back to the community: Report the False Positives in github/codeql: CodeQL engineers and members of the community are actively monitoring these. When we came across the false positive explained before, we opened github/codeql#14685. Suggest new models for the Security Lab’s CodeQL Community Packs: Whether you’re inclined to contribute by crafting a pull request introducing novel models or queries or by opening an Issue to share your model or query concepts, you are already having a huge impact on the research community. Furthermore, the repository is also monitored by CodeQL engineers, so your suggestion might make it to the main repository impacting a huge amount of users and enterprises. Your engagement is more impactful than you might think. CodeQL model editor If you are interested in learning about supporting new dependencies with CodeQL, please see the CodeQL model editor. The model editor is designed to help you model external dependencies of your codebase that are not supported by the standard CodeQL Libraries. Now that we are sure about the exploitability of the issue, we can move on to the exploitation phase. GitHub Codespaces Codespaces is GitHub’s solution for cloud, instant and customizable development environments based on Visual Studio Code. In this post, we will be using Codespaces as our exploitation environment due to its safe (isolated) and ephemeral nature, as we are one click away from creating and deleting a codespace. Although this feature has its own billing, we will be using the free 120 core hours per month. Creating a codespace I wasn’t kidding when I said “we are one click away from creating and deleting a codespace”—simply go to “Code” and click “Create codespace on dev.” Fortunately for us, Frigate maintainers have helpfully developed a custom devcontainer configuration for seamless integration with VSCode (and so, Codespaces). Customizing devcontainer configuration For more information about .devcontainer customization, refer to the documentation. Once loaded, I suggest you close the current browser tab and instead connect to the Codespaces using VSCode along with the Remote Explorer extension. With that set up, we have a fully integrated environment with built-in port forwarding. Set up for debugging and exploitation When performing security research, having a full setup ready for debugging can be a game changer. In most cases, exploiting the vulnerability requires analyzing how the application processes and reacts to your interactions, which can be impossible without debugging. Debugging Right after creating the codespace we can see that it failed: Build error Given that there is an extensive devcontainer configuration, we can guess that it was not made for Codespaces, but for a local VSCode installation not meant to be used in the cloud. Clicking “View Creation Log” helps us find out that Docker is trying to find a non-existing device: ERROR: for frigate-devcontainer - Cannot start service devcontainer: error gathering device information while adding custom device "/dev/bus/usb": no such file or directory We need to head to the docker-compose.yml file (/workspaces/frigate/docker-compose.yml) and comment the following out: The devices property The deploy property The /dev/bus/usb volume Afterwards, we go to /workspaces/frigate/.devcontainer/post_create.sh and remove lines 5-9. After the change, we can successfully rebuild the container: Rebuilding the container Once rebuilt, we can see 6 ports in the port forwarding section. However, Frigate API, the one we are targeting through nginx, is not active. To solve that, we can start debugging by heading to the “Run and Debug” (left) panel and click the green (play-like) button to start debugging Frigate. Exploitation The built-in port forwarding feature allows us to use network-related software like Burp Suite or Caido right from our native host, so we can send the following request: POST /api/config/save HTTP/1.1 Host: 127.0.0.1:53128 Content-Length: 50 !!python/object/apply:os.popen - touch /tmp/pwned Using the debugging setup, we can analyze how new_config flows to yaml.load and creates the /tmp/pwned file. Now that we have a valid exploit to prove the vulnerability, we are ready to report it to the project. Private vulnerability reporting Reporting vulnerabilities in open source projects has never been an easy subject for many reasons: finding a private way of communicating with maintainers, getting their reply, and agreeing on so-many topics that a vulnerability covers is quite challenging on a text-based channel. That is what private vulnerability reporting (PVR) solves: a single, private, interactive place in which security researchers and maintainers work together to make their software more secure, and their dependent consumers more aware. Closing the loop Published advisories resulting from private vulnerability reports can be included in the GitHub Advisory Database to automatically disclose your report to end users using Dependabot! Note that GitHub has chosen to introduce this feature in an opt-in manner, aligning with our developer-first philosophy. This approach grants project maintainers the autonomy to decide whether they wish to participate in this reporting experience. That said, tell your favorite maintainers to enable PVR! You can find inspiration in the issues we open when we can’t find a secure and private way of reporting a vulnerability. Sending the report Once we validated the vulnerability and built a proof of concept (PoC), we can use private vulnerability reporting to privately communicate with Frigate maintainers. This feature allows for special values like affected products, custom CVSS severity, linking a CWE and assigning credits with defined roles, ensuring precise documentation and proper recognition, crucial for a collaborative and effective security community. Once reported, it allows for both ends (reporter and maintainer) to collaborate on a chat, and code together in a temporary private fork. On the maintainer side, they are one click away from requesting a CVE, which generally takes just two days to get created. For more information on PVR, refer to the documentation. Example of a published report GitHub and security research In today’s tech-driven environment, GitHub serves as a valuable resource for security researchers. With tools such as code scanning, Codespaces, and private vulnerability reporting seamlessly integrated into the platform, researchers can effectively identify and address vulnerabilities end to end. This comprehensive strategy not only makes research easier but also enhances the global cybersecurity community. By offering a secure, collaborative, and efficient platform to spot and tackle potential threats, GitHub empowers both seasoned security professionals and aspiring researchers. It’s the go-to destination for boosting security and keeping up with the constantly changing threat landscape. Happy coding and research! GitHub Security Lab’s mission is to inspire and enable the community to secure the open source software we all depend on. Learn more about their work.
  19. The internet that we use today is a massive network of interconnected devices and services. Application Programming Interfaces (APIs) are an essential but sometimes invisible technology layer that underpins services ranging from social media to online banking. APIs serve as messengers between apps, allowing them to communicate data and functionality seamlessly, making API security a […] The post 71% Website Vulnerable: API Security Becomes Prime Target for Hackers appeared first on Kratikal Blogs. The post 71% Website Vulnerable: API Security Becomes Prime Target for Hackers appeared first on Security Boulevard. View the full article
  20. I have been recently watching The Americans, a decade-old TV series about undercover KGB agents living disguised as a normal American family in Reagan’s America in a paranoid period of the Cold War. I was not expecting this weekend to be reading mailing list posts of the same type of operation being performed on open source maintainers by agents with equally shadowy identities (CVE-2024-3094). As The Grugq explains, “The JK-persona hounds Lasse (the maintainer) over multiple threads for many months. Fortunately for Lasse, his new friend and star developer is there, and even more fortunately, Jia Tan has the time available to help out with maintenance tasks. What luck! This is exactly the style of operation a HUMINT organization will run to get an agent in place. They will position someone and then create a crisis for the target, one which the agent is able to solve.” The operation played out over two years, getting the agent in place, setting up the infrastructure for the attack, hiding it from various tools, and then rushing to get it into Linux distributions before some recent changes in systemd were shipped that would have stopped this attack from working. An equally unlikely accident resulted when Andres Freund, a Postgres maintainer, discovered the attack before it had reached the vast majority of systems, from a probably accidental performance slowdown. Andres says, “I didn’t even notice it while logging in with SSH or such. I was doing some micro-benchmarking at the time and was looking to quiesce the system to reduce noise. Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd. Which showed lots of cpu time in code with perf unable to attribute it to a symbol, with the dso showing as liblzma. Got suspicious. Then I recalled that I had seen an odd valgrind complaint in my automated testing of Postgres, a few weeks earlier, after some package updates were installed. Really required a lot of coincidences.” It is hard to overstate how lucky we were here, as there are no tools that will detect this vulnerability. Even ex-post it is not possible to detect externally as we do not have the private key needed to trigger the vulnerability, and the code is very well hidden. While Linus’s law has been stated as “given enough eyeballs all bugs are shallow,” we have seen in the past this is not always true, or there are just not enough eyeballs looking at all the code we consume, even if this time it worked. In terms of immediate actions, the attack appears to have been targeted at subset of OpenSSH servers patched to integrate with systemd. Running SSH servers in containers is rare, and the initial priority should be container hosts, although as the issue was caught early it is likely that few people updated. There is a stream of fixes to liblzma, the xz compression library where the exploit was placed, as the commits from the last two years are examined, although at present there is no evidence that there are exploits for any software other than OpenSSH included. In the Docker Scout web interface you can search for “lzma” in package names, and issues will be flagged in the “high profile vulnerabilities” policy. So many commentators have simple technical solutions, and so many vendors are using this to push their tools. As a technical community, we want there to be technical solutions to problems like this. Vendors want to sell their products after events like this, even though none even detected it. Rewrite it in Rust, shoot autotools, stop using GitHub tarballs, and checked-in artifacts, the list goes on. These are not bad things to do, and there is no doubt that understandability and clarity are valuable for security, although we often will trade them off for performance. It is the case that m4 and autotools are pretty hard to read and understand, while tools like ifunc allow dynamic dispatch even in a mostly static ecosystem. Large investments in the ecosystem to fix these issues would be worthwhile, but we know that attackers would simply find new vectors and weird machines. Equally, there are many naive suggestions about the people, as if having an identity for open source developers would solve a problem, when there are very genuine people who wish to stay private while state actors can easily find fake identities, or “just say no” to untrusted people. Beware of people bringing easy solutions, there are so many in this hot-take world. Where can we go from here? Awareness and observability first. Hyper awareness even, as we see in this case small clues matter. Don’t focus on the exact details of this attack, which will be different next time, but think more generally. Start by understanding your organization’s software consumption, supply chain, and critical points. Ask what you should be funding to make it different. Then build in resilience. Defense in depth, and diversity — not a monoculture. OpenSSH will always be a target because it is so widespread, and the OpenBSD developers are doing great work and the target was upstream of them because of this. But we need a diverse ecosystem with multiple strong solutions, and as an organization you need second suppliers for critical software. The third critical piece of security in this era is recoverability. Planning for the scenario in which the worst case has happened and understanding the outcomes and recovery process is everyone’s homework now, and making sure you are prepared with tabletop exercises around zero days. This is an opportunity for all of us to continue working together to strengthen the open source supply chain, and to work on resilience for when this happens next. We encourage dialogue and discussion on this within Docker communities. Learn more Docker Scout dashboard: https://scout.docker.com/vulnerabilities/id/CVE-2024-3094 NIST CVE: https://nvd.nist.gov/vuln/detail/CVE-2024-3094 View the full article
  21. In episode 323, the hosts discuss two prominent topics. The first segment discusses a significant vulnerability discovered in hotel locks, branded as ‘Unsaflok,’ affecting 3 million doors across 131 countries. The vulnerability allows attackers to create master keys from a regular key, granted access to all doors in a hotel. The co-hosts also discuss the […] The post New Hotel Lock Vulnerabilities, Glassdoor Anonymity Issues appeared first on Shared Security Podcast. The post New Hotel Lock Vulnerabilities, Glassdoor Anonymity Issues appeared first on Security Boulevard. View the full article
  22. Overview NSFOCUS CERT recently detected that a backdoor vulnerability in XZ Utils (CVE-2024-3094) was disclosed from the security community, with a CVSS score of 10. Because the SSH underlying layer relies on liblzma, an attacker could exploit this vulnerability to bypass SSH authentication and gain unauthorized access to affected systems, allowing arbitrary code execution. After […] The post XZ Utils Backdoor Vulnerability (CVE-2024-3094) Advisory appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.. The post XZ Utils Backdoor Vulnerability (CVE-2024-3094) Advisory appeared first on Security Boulevard. View the full article
  23. On Friday March 29, Microsoft employee Andres Freund shared that he had found odd symptoms in the xz package on Debian installations. Freund noticed that ssh login was requiring a lot of CPU and decided to investigate leading to the discovery. The vulnerability has received the maximum security ratings with a CVS score of 10 and a Red Hat Product Security critical impact rating. Red Hat assigned the issue CVE-2024-3094 but based on the severity and a previous major bug being named Heartbleed, the community has cheekily named the vulnerability a more vulgar name and inverted the Heartbleed logo. Luckily the vulnerability has been caught early Red Hat wrote: "Malicious code was discovered in the upstream tarballs of xz, starting with version 5.6.0. Through a series of complex obfuscations, the liblzma build process extracts a prebuilt object file from a disguised test file existing in the source code, which is then used to modify specific functions in the liblzma code. This results in a modified liblzma library that can be used by any software linked against this library, intercepting and modifying the data interaction with this library." The malicious injection can be found only in the tarball download package of xz versions 5.6.0 and 5.6.1 libraries. The Git distribution does not include the M4 Macro that triggers the code. The second-stage artifacts are present in the Git repository for the injection during the build time, if the malicious M4 macro is present. Without the merge into the build, the 2nd-stage file is innocuous. You are recommended to check for xz version 5.6.0 or 5.6.1 in the following distributions and downgrade to 5.4.6. If you cannot you should disable public facing SSH servers. More from TechRadar Pro Best managed VPS serversCheck out our top picks for best managed WordPressScalaHosting review View the full article
  24. Understand how to respond to the announcement of the XZ Utils backdoor. The post What You Need to Know About the XZ Utils Backdoor appeared first on Security Boulevard. View the full article
  25. CVE-2024-3094 is a reported supply chain compromise of the xz libraries. The resulting interference with sshd authentication could enable an attacker to gain unauthorized access to the system. Overview Malicious code was identified within the xz upstream tarballs, beginning with version 5.6.0. This malicious code is introduced through a sophisticated obfuscation technique during the liblzma […] The post Understanding and Mitigating the Fedora Rawhide Vulnerability (CVE-2024-3094) appeared first on OX Security. The post Understanding and Mitigating the Fedora Rawhide Vulnerability (CVE-2024-3094) appeared first on Security Boulevard. View the full article
  • Forum Statistics

    63.6k
    Total Topics
    61.7k
    Total Posts
×
×
  • Create New...