Search the Community
Showing results for tags 'apache'.
-
The post Setting Up LAMP (Apache, MariaDB, PHP) on Fedora 40 Server first appeared on Tecmint: Linux Howtos, Tutorials & Guides. After installing the Fedora 40 server edition, you might want to host a website on your server. To do this, you need a reliable server… View the full article
-
Written by: Jacob Thompson

The Apache XML Security for C++ library, code-named xml-security-c, is part of the Apache Santuario project. The library implements the XML Digital Signature and XML Encryption specifications, making them available to C++ developers. By default, the library resolves references to external URIs passed in Extensible Markup Language (XML) signatures, allowing server-side request forgery (SSRF). There is no way to disable this behavior through configuration alone, and there is no patch available; developers must either scan their codebase to find every usage of xml-security-c and override the URI resolver to avoid SSRF, or manually patch and recompile the library to remove the capability entirely. We recommend that C++ developers using XML audit their code bases for usage of this library, determine whether they have introduced a security vulnerability, and if so, modify their code to avoid SSRF.

Background

Server-side request forgery (SSRF) is a class of security vulnerability in which an untrusted party tricks a server into making an HTTP request by passing the server a malicious input. Although the attacker usually cannot view the response, requests to the loopback interface (127.0.0.1), RFC 1918 addresses (e.g., 10.0.0.0/8 or 192.168.0.0/16), or any other destination originate from the server itself, allowing requests that would otherwise be restricted by firewall rules or that would be impossible to perform externally. Consider the obvious consequences if a server's uninterruptible power supply offers a web service bound to 127.0.0.1:8080 without authentication, accepting a GET request to http://127.0.0.1:8080/ups/changePowerState?state=off, and this service becomes reachable via server-side request forgery.
XML is complex and contains many optional features that are not suitable, or even useful, in the common case of a server accepting untrusted XML documents on an external interface. Some allow server-side request forgery merely by initializing an XML parser in its default configuration and passing it an untrusted document. For example, XML External Entities allow a document to define custom entity values (analogous to &lt; meaning < in HTML) that are replaced by the response from an external URL or the contents of a local file rather than by a static string. Despite having no real-world relevance to a server accepting and parsing untrusted, potentially malicious documents, this feature was enabled by default in many parsers and plagued them throughout the 2010s; XML External Entity Injection was promoted to an item in the OWASP Top Ten in 2017. Current versions of many XML parsers have now been hardened to treat support for external entities, document-type definitions, schemas, and so forth as opt-in features that are disabled by default.

In this post, we present a different form of server-side request forgery affecting XML documents. We have found this issue being actively exploited; it was recently addressed by Ivanti in CVE-2024-21893.

XML Signatures External URI Feature

The XML Signature specification standardizes a way to digitally sign XML documents. From a security perspective, the specification includes features that introduce additional paths to server-side request forgery into XML, beyond XML External Entity Injection. The XML Signature Syntax and Processing Version 2.0 specification states that "We RECOMMEND XML Signature applications be able to dereference URIs in the HTTP scheme," which, absent other protections such as egress firewall rules, allows for SSRF. This recommendation is carried over from version 1.1 of the specification, so version 1.x signatures are also affected.
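The opt-in parser hardening described above can be sketched with Xerces-C, the XML parser that underlies xml-security-c. This is an illustrative example, not taken from the article; the method names come from the Xerces-C XercesDOMParser API and should be verified against your installed version:

```cpp
// Sketch: hardening a Xerces-C DOM parser against XML External Entity
// attacks before parsing untrusted input. Method names per the Xerces-C
// XercesDOMParser API; verify against your installed version.
#include <xercesc/parsers/XercesDOMParser.hpp>
#include <xercesc/util/PlatformUtils.hpp>

using namespace xercesc;

int main() {
    XMLPlatformUtils::Initialize();
    {
        XercesDOMParser parser;
        // Do not fetch external DTDs referenced by the document.
        parser.setLoadExternalDTD(false);
        // Do not resolve entities to local files or remote URIs.
        parser.setDisableDefaultEntityResolution(true);
        // Keep entity references unexpanded in the resulting DOM.
        parser.setCreateEntityReferenceNodes(true);
        // parser.parse("untrusted.xml");
    }
    XMLPlatformUtils::Terminate();
    return 0;
}
```

With these settings, a document attempting XXE-style resolution simply fails to dereference its external entities rather than triggering a network request or file read.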
Figure 1 shows a simple XML document that, when parsed by xml-security-c version 2.0.4 and earlier, causes the parser to make an HTTP request to http://www.example.com/ssrf.

<test>
  <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
    <ds:SignedInfo>
      <ds:CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
      <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#Manifest"/>
      <ds:Reference URI="http://www.example.com/ssrf">
        <ds:Transforms>
          <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
          <ds:Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315#WithComments"/>
        </ds:Transforms>
        <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
        <ds:DigestValue>AAAAAAAAAAAAAAAAAAAAAAAAAAA=</ds:DigestValue>
      </ds:Reference>
    </ds:SignedInfo>
    <ds:SignatureValue>AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==</ds:SignatureValue>
    <ds:KeyInfo>
      <ds:X509Data>
        <ds:X509SubjectName>CN=nobody</ds:X509SubjectName>
      </ds:X509Data>
    </ds:KeyInfo>
  </ds:Signature>
</test>

Figure 1: Sample XML document to trigger SSRF in affected xml-security-c library

Prior Work

Other open-source projects have already identified and worked around this issue. The Shibboleth xmltooling project reported a server-side request forgery vulnerability as CVE-2023-36661 and implemented a workaround in the xmltooling code that overrides the default, non-secure URI resolver in xml-security-c with a custom one that does nothing. While this mitigation is sufficient to resolve the issue in xmltooling, so long as every possible instance of xml-security-c is located and fixed, the root cause arguably lies in the xml-security-c library not being secure by default. Fixing the issue in xmltooling rather than upstream did not help other users of xml-security-c who were unaware of the need to reconfigure it.
Dangerous XML features, such as the ability to make external network requests merely by parsing a document, should in our view be disabled in the default configuration and enabled only when parsing documents from a trusted source. In fact, a different library under the Apache Santuario project, Apache XML Security for Java, has a "secure validation" feature that is enabled by default. Among other characteristics, the secure validation feature "[d]oes not allow a Reference to call the ResolverLocalFilesystem or the ResolverDirectHTTP (references to local files and HTTP resources are forbidden)." Thus Java developers, unlike C++ developers, are already protected against SSRF in the default configuration of the Java port of the library. The secure validation feature never made it to the C++ version.

Disclosure

Mandiant reported the non-secure default configuration in xml-security-c to the Apache Software Foundation (ASF). Because external URI resolution is a legitimate feature of the XML Digital Signature specification, the ASF did not issue a CVE or a new release of xml-security-c. The Apache Santuario project did add a new disclaimer for xml-security-c, shown in Figure 2, stating that XML Signatures and XML Encryption are difficult to implement securely; that xml-security-c is not secure by default and does not provide hardening configuration options; and that the library is not modular, making it difficult to ever add such features. Going forward, xml-security-c is no longer supported as a standalone library, and the Shibboleth project will maintain it only as a component of Shibboleth. The developers suggest finding another solution.

Figure 2: Apache Santuario added a disclaimer advising against use of the xml-security-c library

Recommendations

C++ developers should first scan their projects to determine whether they use the Apache xml-security-c library.
If so, the software may contain a server-side request forgery vulnerability unless the code is patched. In some cases, usage of xml-security-c may be very limited, or it may be inconvenient to recompile the library when it is obtained in binary form. If developers can pinpoint each use of the XSECProvider class, they can call the setDefaultURIResolver method on each XSECProvider object, passing a custom implementation of XSECURIResolver that simply does nothing. This avoids the need to recompile xml-security-c and ensures the software remains secure even if it is ever linked against the stock xml-security-c.

An alternative, and in our view superior, approach is to patch the xml-security-c library so that it is secure by default with regard to URI resolution. Mandiant developed a patch that replaces the vulnerable default XSECURIResolverXerces with a new XSECURIResolverNoop that does nothing, thus fixing the SSRF. After applying the patch and recompiling, the library is no longer susceptible to this form of SSRF. Note that any legitimate uses of external URIs would then need to manually specify XSECURIResolverXerces as the URI resolver. The patch is available for download now (note: the download is a ZIP file, which contains the patch as a TXT file). View the full article
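As a sketch of the first approach, a do-nothing resolver installed on an XSECProvider might look like the following. XSECProvider, setDefaultURIResolver, and XSECURIResolver are from the xml-security-c headers, but the exact signatures should be checked against your installed version; NoopURIResolver is a hypothetical name introduced here for illustration:

```cpp
// Sketch (verify against your xml-security-c version): override the default
// URI resolver with one that refuses to dereference any URI, preventing SSRF.
#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/BinInputStream.hpp>
#include <xsec/utils/XSECPlatformUtils.hpp>
#include <xsec/framework/XSECProvider.hpp>
#include <xsec/framework/XSECURIResolver.hpp>

// Hypothetical no-op resolver: never fetches the referenced URI.
class NoopURIResolver : public XSECURIResolver {
public:
    xercesc::BinInputStream* resolveURI(const XMLCh* /*uri*/) override {
        return nullptr;  // refuse to dereference; signature verification of
                         // external references will simply fail
    }
    XSECURIResolver* clone() override {
        return new NoopURIResolver();
    }
};

int main() {
    xercesc::XMLPlatformUtils::Initialize();
    XSECPlatformUtils::Initialise();
    {
        XSECProvider provider;
        NoopURIResolver resolver;
        // Signatures created or verified through this provider now use the
        // no-op resolver instead of the default XSECURIResolverXerces.
        provider.setDefaultURIResolver(&resolver);
        // ... create/verify signatures via provider as usual ...
    }
    XSECPlatformUtils::Terminate();
    xercesc::XMLPlatformUtils::Terminate();
    return 0;
}
```

This must be repeated for every XSECProvider instance in the codebase, which is why the article considers the upstream patch the more robust fix.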
-
The post How to Install Apache, MySQL/MariaDB and PHP in Linux first appeared on Tecmint: Linux Howtos, Tutorials & Guides. This how-to guide explains how to install the latest version of Apache, MySQL (or MariaDB), and PHP, along with the required PHP modules, on RHEL-based… View the full article
-
This post is the second part of our two-part series on the latest performance improvements of stateful pipelines. The first part of this... View the full article
-
- databricks
- pipelines
-
(and 4 more)
Tagged with:
-
In enterprises, SREs, DevOps engineers, and cloud architects often discuss which observability platform to choose for faster troubleshooting and for understanding the performance of their production systems. There are certain questions they need to answer to get maximum value for their team, such as: Will an observability tool support all kinds of workloads and heterogeneous systems? Will the tool support all kinds of data aggregation, such as logs, metrics, traces, topology, etc.? Will the investment in the (ongoing or new) observability tool be justified? In this article, we show how to get started with unified observability of your entire infrastructure using open-source SkyWalking and the Istio service mesh. View the full article
-
- apache
- skywalking
-
(and 2 more)
Tagged with:
-
The post 16 Apache Web Server Security and Hardening Tips first appeared on Tecmint: Linux Howtos, Tutorials & Guides. Apache web server is one of the most popular and widely used web servers for hosting files and websites. It's easy to install and configure… View the full article
-
- web servers
- hardening
-
(and 2 more)
Tagged with:
-
Today, we are excited to announce that Amazon EMR on EKS now supports managed Apache Flink, available in public preview. With this launch, customers who already use EMR can run their Apache Flink applications alongside other types of applications on the same Amazon EKS cluster, helping improve resource utilization and simplify infrastructure management. Customers who already run big data frameworks on Amazon EKS can now let Amazon EMR automate provisioning and management. View the full article
-
We are excited to launch two new features that help enforce access controls with Amazon EMR on EC2 clusters (EMR Clusters). These features are supported with jobs that are submitted to the cluster using the EMR Steps API. First is Runtime Role with EMR Steps. A Runtime Role is an AWS Identity and Access Management (IAM) role that you associate with an EMR Step. An EMR Step uses this role to access AWS resources. The second is integration with AWS Lake Formation to apply table and column-level access controls for Apache Spark and Apache Hive jobs with EMR Steps. View the full article
-
- iam
- lake formation
-
(and 7 more)
Tagged with:
-
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka versions 3.1.1 and 3.2.0 for new and existing clusters. Apache Kafka 3.1.1 and 3.2.0 include several bug fixes and new features that improve performance. Some of the key features include enhancements to metrics and the use of topic IDs. MSK will continue to use and manage ZooKeeper for quorum management in this release for stability. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 3.1.1 and 3.2.0. View the full article
-
Amazon EMR release 6.6 now supports Apache Spark 3.2, Apache Spark RAPIDS 22.02, CUDA 11, Apache Hudi 0.10.1, Apache Iceberg 0.13, Trino 0.367, and PrestoDB 0.267. You can use the performance-optimized version of Apache Spark 3.2 on EMR on EC2, EMR on EKS, and the recently released EMR Serverless. In addition, Apache Hudi 0.10.1 and Apache Iceberg 0.13 are available on EC2, EKS, and Serverless. Apache Hive 3.1.2 is available on EMR on EC2 and EMR Serverless. Trino 0.367 and PrestoDB 0.267 are available only on EMR on EC2. View the full article
-
GoAccess is an interactive, real-time web server log analyzer that lets you quickly analyze and view web server logs. It is open source and runs from the command line on Unix/Linux operating systems. The post GoAccess (A Real-Time Apache and Nginx) Web Server Log Analyzer first appeared on Tecmint: Linux Howtos, Tutorials & Guides. View the full article
-
Amazon EMR on Amazon EKS provides a new deployment option for Amazon EMR that allows you to run Apache Spark on Amazon Elastic Kubernetes Service (Amazon EKS). If you already use Amazon EMR, you can now run Amazon EMR based applications with other types of applications on the same Amazon EKS cluster to improve resource utilization and simplify infrastructure management across multiple AWS Availability Zones. If you already run big data frameworks on Amazon EKS, you can now use Amazon EMR to automate provisioning and management, and run Apache Spark up to 3x faster. With this deployment option, you can focus on running analytics workloads while Amazon EMR on Amazon EKS builds, configures, and manages containers. View the full article
-
We are pleased to announce the general availability of Amazon MSK Serverless, a type of Amazon MSK cluster that makes it easier for developers to run Apache Kafka without having to manage capacity. MSK Serverless automatically provisions and scales compute and storage resources and offers throughput-based pricing, so you can use Apache Kafka on demand and pay for the data you stream and retain. View the full article
-
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.8.0 for new and existing clusters. Apache Kafka 2.8.0 includes several bug fixes and new features that improve performance. Some of the key features include connection rate limiting to avoid problems with misconfigured clients (KIP-612) and topic identifiers, which provide performance benefits (KIP-516). There is also an early-access feature to replace ZooKeeper with a self-managed metadata quorum (KIP-500); however, this is not recommended for use in production. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 2.8.0. View the full article
-
Developing your website from scratch can be a daunting task. It's time-consuming and expensive if you are planning to hire a developer. An easy way to get your blog or website off the ground… The post How to Install Drupal with Apache on Debian and Ubuntu first appeared on Tecmint: Linux Howtos, Tutorials & Guides. View the full article
-
Apache Flink Kinesis Consumer now supports Enhanced Fan Out (EFO) and the HTTP/2 data retrieval API for Amazon Kinesis Data Streams. EFO allows Amazon Kinesis Data Streams consumers to scale by offering each consumer a dedicated read throughput up to 2MB/second. The HTTP/2 data retrieval API reduces latency of data delivery from producers to consumers to 70 milliseconds or better. In combination, these two features allow you to build low latency Apache Flink applications that utilize dedicated throughput from Amazon Kinesis Data Streams. View the full article
-
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.7.0 for new and existing clusters. Apache Kafka 2.7.0 includes several bug fixes and new features that improve performance. Some key features include the ability to throttle create topic, create partition, and delete topic operations (KIP-599) and configurable TCP connection timeout (KIP-601). For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 2.7.0. View the full article
-
Amazon Kinesis Data Analytics for Apache Flink now provides access to the Apache Flink Dashboard, giving you greater visibility into your applications and advanced monitoring capabilities. You can now view your Apache Flink application’s environment variables, over 120 metrics, logs, and the directed acyclic graph (DAG) of the Apache Flink application in a simple, contextualized user interface. View the full article
-
You can now build and run streaming applications using Apache Flink version 1.11 in Amazon Kinesis Data Analytics for Apache Flink. Apache Flink v1.11 provides improvements to the Table and SQL API, which is a unified, relational API for stream and batch processing and acts as a superset of the SQL language specially designed for working with Apache Flink. Apache Flink v1.11 capabilities also include an improved memory model and RocksDB optimizations for increased application stability, and support for task manager stack traces in the Apache Flink Dashboard. View the full article
-
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.6.0 for new and existing clusters. Apache Kafka 2.6.0 includes several bug fixes and new features that improve performance. Some key features include native APIs to manage client quotas (KIP-546) and explicit rebalance triggering to enable advanced consumer use cases (KIP-568). For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 2.6.0. View the full article
-
Streaming extract, transform, and load (ETL) jobs in AWS Glue can now read data encoded in the Apache Avro format. Previously, streaming ETL jobs could read data in the JSON, CSV, Parquet, and XML formats. With the addition of Avro, streaming ETL jobs now support all the same formats as batch AWS Glue jobs. View the full article
-
Streaming extract, transform, and load (ETL) jobs in AWS Glue can now ingest data from Apache Kafka clusters that you manage yourself. Previously, AWS Glue supported reading specifically from Amazon Managed Streaming for Apache Kafka (Amazon MSK). With this update, AWS Glue allows you to perform streaming ETL on data from Apache Kafka whether it is deployed on-premises or in the cloud. View the full article
-
Forum Statistics
Total Topics: 67.4k
Total Posts: 65.3k