All Activity
- Yesterday
-
Monitor service dependencies with Amazon CloudWatch Application Signals SLOs
Amazon CloudWatch Application Signals now supports creating Service Level Objectives (SLOs) using metrics from your service dependencies. With this new capability, you can monitor the performance of your services' dependencies and proactively resolve problems through SLO goal setting. Using Application Signals, you can create period-based or request-based SLOs that track key metrics, such as latency and faults, for the outgoing requests from your services to their dependencies. You can see how your dependencies perform and how this impacts the reliability of your overall service. For example, if your e-commerce service relies on a payment processor, you can set an SLO to monitor the latency of requests from your createOrder operation to the payment processor. If this SLO degrades, you can quickly investigate the dependency as the potential root cause before it affects your customer-facing service. SLOs on dependencies are available in all commercial AWS Regions where CloudWatch Application Signals is available. Customers can now sign up for the new bundled pricing plan for Application Signals. To learn more, see Amazon CloudWatch pricing. View the full article
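To make the goal setting concrete, here is a minimal sketch (not an Application Signals API call) of how a request-based SLO's attainment and remaining error budget can be computed; the traffic numbers and the 99.9% goal are invented for illustration:

```python
# Hypothetical sketch of request-based SLO math; these are not
# Application Signals APIs, just the underlying arithmetic.

def slo_attainment(good_requests: int, total_requests: int) -> float:
    """Percentage of requests that met the SLI threshold (e.g. latency < 500 ms)."""
    if total_requests == 0:
        return 100.0
    return 100.0 * good_requests / total_requests

def error_budget_remaining(attainment: float, goal: float) -> float:
    """Fraction of the error budget left, where goal is e.g. 99.9 (percent)."""
    budget = 100.0 - goal
    burned = max(0.0, goal - attainment)
    return max(0.0, (budget - burned) / budget) if budget > 0 else 0.0

# Example: createOrder -> payment-processor dependency with a 99.9% goal.
attainment = slo_attainment(good_requests=99_950, total_requests=100_000)
remaining = error_budget_remaining(attainment, goal=99.9)
```

If attainment drops below the goal, the remaining budget shrinks toward zero, which is the signal to investigate the dependency.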
-
Chinese tech giants boosted Nvidia GPU purchases by 4x to 6x during Q1
Chinese tech giants continue to buy Nvidia AI GPUs despite the DeepSeek impact and a demand slowdown. View the full article
-
Amazon Security Lake achieves FedRAMP High and Moderate authorization
Amazon Security Lake has achieved FedRAMP High authorization in the AWS GovCloud (US) Region and FedRAMP Moderate in the US East and US West Regions. If you’re a federal agency, public sector organization, or enterprise with FedRAMP compliance requirements, you can now centralize your security data using Amazon Security Lake. Amazon Security Lake automatically centralizes security data from AWS environments, SaaS providers, on-premises sources, and cloud sources into a purpose-built data lake stored in your account. With Security Lake, you can get a more complete understanding of your security data across your entire organization. You can also improve the protection of your workloads, applications, and data. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard that is part of the Linux Foundation. With OCSF support, the service normalizes and combines security data from AWS and a broad range of enterprise security data sources. You can start a 15-day free trial of Amazon Security Lake with a single click in the AWS Management Console. To learn more and get started, see the following resources: How to develop an Amazon Security Lake POC and Understanding Amazon Security Lake Costs. View the full article
-
Announcing the Built-On Databricks Startup Challenge
Are you a startup building core, customer-facing B2B products on Databricks? Then we have a Challenge for you! On the heels of our Generative AI...View the full article
-
Vibe coding isn’t here to take developer jobs. It’s here to transform them into AI architects
Vibe coding—creating and editing software simply by giving instructions to AI—enables businesses and individuals to unleash their creativity without requiring a developer. Some worry that vibe coding will replace developers, but that’s not the case. This trend shows that programming is evolving, and those who adapt will find more opportunities, not fewer... This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
-
Amazon CloudWatch Logs increases maximum log event size to 1 MB
Amazon CloudWatch Logs now supports log events up to 1 MB in size, a 4x increase from the previous 256 KB limit. This enhancement applies to the CloudWatch Logs PutLogEvents API and OpenTelemetry Protocol (OTLP) endpoint. Customers can now capture richer log data while maintaining data integrity, eliminating the need to truncate large events or split them across multiple entries. This is especially valuable for use cases such as stack traces, debug outputs, and detailed application and security audit logs, enabling simplified troubleshooting, enhanced security audit capabilities, and better visibility into application behavior. The increased limit is automatically available in all AWS Regions where CloudWatch Logs is available, including the AWS GovCloud (US) Regions. For more information, visit the CloudWatch Logs documentation. View the full article
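A minimal illustration (not part of the AWS SDK) of checking the new limit before calling PutLogEvents. The 26-byte per-event overhead is the figure PutLogEvents has historically counted on top of the UTF-8 message bytes, carried over here as an assumption:

```python
# Illustrative helper: does a log message fit in a single event under the
# new 1 MB limit? The 26-byte overhead per event is an assumption based on
# the PutLogEvents accounting rules, not a value returned by the API.

MAX_EVENT_BYTES = 1_048_576  # 1 MB, up from 262,144 bytes (256 KB)
PER_EVENT_OVERHEAD = 26      # bytes counted per event in addition to the message

def fits_in_one_event(message: str) -> bool:
    return len(message.encode("utf-8")) + PER_EVENT_OVERHEAD <= MAX_EVENT_BYTES

big_trace = "x" * 500_000  # ~500 KB: over the old 256 KB limit, under the new 1 MB limit
print(fits_in_one_event(big_trace))  # True
```

Under the old limit this message would have needed truncation or splitting across events.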
-
Russian spy infiltrates ASML and NXP to steal technical data necessary to build 28nm-capable fabs
A Russian engineer is accused of leaking confidential technical data from ASML, NXP, TSMC, and GlobalFoundries to Russia, allegedly to support construction of a 28nm-capable fab. View the full article
-
Defective RX 9070 XT card with pitted silicon surface runs extremely hot — report indicates it's unclear if this was an isolated incident
A PowerColor RX 9070 XT Hellhound was found to have defective silicon out of the factory, resulting in unsustainable hotspot temperatures that caused the GPU to overheat and throttle. View the full article
-
WD's Blue SN5000 4TB SSD is only $209 while supplies last
The WD Blue SN5000 is a single-sided 4TB SSD that pairs well with a PC or the PS5 and is on sale at Amazon for just $209. View the full article
-
Amazon CloudFront supports VPC Origin modification with CloudFront Functions
In November 2024, CloudFront Functions introduced origin modifications, allowing you to conditionally change origin servers on each request. Starting today, you can use this capability with VPC Origins and origin groups, enabling you to create even more sophisticated routing policies for your applications delivered from CloudFront. You can now create dynamic routing policies that direct individual requests between any origin, including VPC Origins, by simply providing the ID for the origin. For example, you can automatically route each request to different applications by creating weights to send a certain percentage of traffic to multiple backend services, all without updating your distribution configuration. You can also create new origin groups dynamically, with the ability to set multiple origins with failover criteria. For example, you can create custom failover logic to update the primary and failover origins based on viewer location or request headers to ensure viewers have the lowest possible latency. These features are now available within CloudFront Functions at no additional charge. For more information, see the CloudFront Developer Guide. For examples of how to use origin modification, see our GitHub examples repository. View the full article
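The weighted-routing idea can be sketched as follows. Note this is a Python illustration of the selection logic only: a real CloudFront Function is written in JavaScript and would pass the chosen ID to the function runtime's origin-update helper, and the origin IDs and weights below are hypothetical:

```python
# Sketch of weighted origin selection: hash each request to a bucket in
# [0, 100) and pick an origin by cumulative weight. Deterministic per
# request ID, so retries of the same request land on the same origin.

import hashlib

WEIGHTED_ORIGINS = [
    ("vpc-origin-blue", 90),   # hypothetical VPC origin ID, 90% of traffic
    ("vpc-origin-green", 10),  # hypothetical canary origin, 10% of traffic
]

def choose_origin(request_id: str) -> str:
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for origin_id, weight in WEIGHTED_ORIGINS:
        cumulative += weight
        if bucket < cumulative:
            return origin_id
    return WEIGHTED_ORIGINS[-1][0]
```

Because the choice happens per request inside the function, shifting the split only means changing the weights, not updating the distribution configuration.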
-
Announcing enhanced autoscaling for Amazon OpenSearch Ingestion pipelines
Amazon OpenSearch Ingestion now supports enhanced autoscaling capabilities, allowing pipelines to scale dynamically based on additional parameters, including Amazon SQS queue size, persistent buffer lag, and the number of incoming HTTP connections. These enhancements improve upon the existing mechanism, which previously relied only on memory and CPU utilization, providing more comprehensive and responsive scaling for your data ingestion workloads. With these improvements, customers can build more resilient and efficient data ingestion pipelines that automatically adapt to varying workloads. The new autoscaling parameters help optimize resource utilization, reduce ingestion bottlenecks, and improve overall pipeline performance, making it easier to handle high-throughput data streams for log analytics, observability, and security analytics use cases. The enhanced autoscaling capabilities are now available in all AWS Regions where Amazon OpenSearch Ingestion is currently offered. You can take advantage of these improvements by updating your existing pipelines or creating new pipelines through the Amazon OpenSearch Service console or APIs at no additional cost. To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide. View the full article
-
IAM Identity Center extends sessions and TIP management capabilities for customers with Microsoft AD
AWS IAM Identity Center has enhanced its session management and trusted identity propagation (TIP) capabilities for customers that connect Microsoft Active Directory (AD) as their identity source. The enhanced capabilities help customers manage user sessions, scale their use of AWS applications, such as Amazon Q Developer Pro, and implement use cases, such as analytics, with trusted identity propagation. With this release, customers who connect Microsoft AD to IAM Identity Center will be able to: (a) configure the session duration for AWS applications and the AWS access portal from a minimum of 15 minutes to a maximum of 90 days; (b) list and delete active user sessions; (c) configure an extended 90-day session duration for Amazon Q Developer Pro, while maintaining shorter session durations for other AWS applications; and (d) enable TIP from business intelligence applications that authenticate users via a third-party identity provider to AWS services, such as Amazon Redshift and Amazon Q Business. IAM Identity Center is the recommended service for managing workforce access to AWS applications and multiple AWS accounts. It enables you to connect your existing source of workforce identities to AWS once and offer your users a single sign-on experience across AWS. It powers the personalized experiences offered by AWS applications, such as Amazon Q, and the ability to define and audit user-aware access to data in AWS services, such as Amazon Redshift. It helps you manage access to multiple AWS accounts from a central place. IAM Identity Center is available at no additional cost in these AWS Regions. Learn more here. View the full article
-
Amazon SageMaker now offers 9 additional visual ETL transforms
Visual ETL in Amazon SageMaker now offers 9 new built-in transforms: “Derived column”, “Flatten”, “Add current timestamp”, “Explode array or map into rows”, “To timestamp”, “Array to columns”, “Intersect”, “Limit” and “Concatenate columns”. Visual ETL in Amazon SageMaker provides a drag-and-drop interface for building ETL flows and authoring flows with Amazon Q Developer. With these new transforms, ETL developers can quickly build more sophisticated data pipelines without having to write custom code for common transform tasks. Each of these new transforms addresses a unique data processing need. For example, use “Derived column” to define a new column based on a math formula or SQL expression, use “To timestamp” to convert a column to timestamp type, or use “Concatenate columns” to build a new string column from the values of other columns with an optional spacer. This new feature is now available in all AWS Regions where Amazon SageMaker is available. Access the supported Region list for the most up-to-date availability information. To learn more, visit our Amazon SageMaker documentation. View the full article
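To illustrate what a few of these transforms do, here is a plain-Python equivalent operating on a single row. The real transforms run on Spark inside SageMaker; the column names and the space spacer below are invented for the example:

```python
# Plain-Python illustration of three of the new Visual ETL transforms
# applied to one row; this is not SageMaker code, just the row-level effect.

from datetime import datetime, timezone

row = {"first_name": "Ada", "last_name": "Lovelace", "price": 10.0, "qty": 3}

# "Derived column": new column from a math formula over existing columns.
row["total"] = row["price"] * row["qty"]

# "Concatenate columns": string column built from others with a spacer.
row["full_name"] = " ".join([row["first_name"], row["last_name"]])

# "Add current timestamp": stamp the row with the processing time.
row["processed_at"] = datetime.now(timezone.utc).isoformat()

print(row["total"], row["full_name"])  # 30.0 Ada Lovelace
```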
-
Amazon RDS Proxy announces TLS 1.3 support for PostgreSQL on Aurora and RDS
Amazon Relational Database Service (RDS) Proxy now supports version 1.3 of the Transport Layer Security (TLS) protocol for Proxy connections to Amazon Aurora PostgreSQL and RDS for PostgreSQL database instances. TLS 1.3 provides improved security through stronger cryptographic algorithms and a simplified handshake process compared to older TLS versions. With this release, RDS Proxy can use TLS 1.3 for connections to Aurora PostgreSQL and RDS for PostgreSQL databases. During connection establishment, the Proxy will automatically negotiate the most secure TLS version supported by the database. Customers can also configure their PostgreSQL database to require TLS 1.3 by setting the ssl_min_protocol_version parameter in their parameter group. TLS 1.3 is already supported for connections to RDS Proxy, as well as for RDS Proxy connections to MySQL engines. RDS Proxy is a fully managed, highly available database proxy for RDS and Amazon Aurora databases. RDS Proxy helps improve application scalability, resiliency, and security. For information about TLS version support and related configuration on Aurora, please review the Aurora documentation. For information on supported database engine versions and Regional availability of RDS Proxy, refer to the RDS and Aurora documentation. View the full article
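As a rough sketch, requiring TLS 1.3 means setting ssl_min_protocol_version in the instance's parameter group. The parameter group name below is hypothetical, the ApplyMethod assumes the parameter is dynamic in RDS (use pending-reboot if it is static for your engine version), and the API call is left commented out because it needs AWS credentials:

```python
# Hypothetical ModifyDBParameterGroup payload that would enforce a minimum
# of TLS 1.3 on a PostgreSQL instance; only the payload is built here.

params = {
    "DBParameterGroupName": "my-postgres-params",  # hypothetical group name
    "Parameters": [
        {
            "ParameterName": "ssl_min_protocol_version",
            "ParameterValue": "TLSv1.3",
            "ApplyMethod": "immediate",  # assumption: dynamic parameter
        }
    ],
}

# import boto3
# boto3.client("rds").modify_db_parameter_group(**params)
```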
-
Amazon Connect now allows supervisors to take additional actions on in-progress chats
Amazon Connect now allows supervisors to take additional actions on in-progress chats directly from the Amazon Connect UI, accelerating issue resolution and improving customer satisfaction. For example, supervisors can now end chats with inactive customers or reassign chats to specific agents or queues. To learn more, please refer to the help documentation or visit the Amazon Connect website. This feature is available in all commercial AWS Regions where Amazon Connect is available. View the full article
-
Amazon RDS Proxy is now available in the AWS GovCloud (US) Regions
Amazon Relational Database Service (RDS) Proxy is now available in the AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions. RDS Proxy is a fully managed, highly available database proxy for RDS and Amazon Aurora databases. RDS Proxy helps improve application scalability, resiliency, and security. Many applications, including those built on modern architectures capable of horizontal scaling based on the ebb and flow of active users, can open a large number of database connections or open and close connections frequently. This can stress the database’s memory and compute, leading to slower performance and limited application scalability. Amazon RDS Proxy sits between your application and database to pool and share established database connections, improving database efficiency and application scalability. In case of a failure, Amazon RDS Proxy automatically connects to a standby database instance within a Region. With Amazon RDS Proxy, database credentials and access can be managed through AWS Secrets Manager and AWS Identity and Access Management (IAM), eliminating the need to embed database credentials in application code. For information on supported database engine versions and Regional availability of RDS Proxy, refer to the RDS and Aurora documentation for RDS Proxy. View the full article
-
AWS Clean Rooms Spark SQL now supports aggregation and list analysis rules
With today’s launch, AWS Clean Rooms provides additional privacy-enhancing controls to support aggregation and list analysis rules using the Spark analytics engine. Using AWS Clean Rooms Spark SQL, you and your partners can now manage how your data is used with aggregation, list, and custom analysis rules, running SQL queries with configurable resources based on your performance, scale, and cost requirements. For example, advertisers can use list analysis rules to create targeted audience segments from collective advertiser and publisher data sets without sharing the raw data used to create the segments. Similarly, publishers and their partners can run media planning and campaign measurement analyses across their data sets using aggregation rules to compile joint statistics results, protecting the underlying data of all collaborators. Additionally, you can now update an existing AWS Clean Rooms collaboration to use the Spark analytics engine instead of creating a new collaboration, making it easier to get started with AWS Clean Rooms Spark SQL. AWS Clean Rooms Spark SQL is generally available in these AWS Regions. AWS Clean Rooms helps companies and their partners more easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. To learn more, visit AWS Clean Rooms. View the full article
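As a rough sketch, a list analysis rule in the advertiser/publisher example above constrains which columns the parties can join on and which columns may appear in results. The column names below are invented, and the exact JSON schema should be confirmed against the AWS Clean Rooms documentation:

```python
# Hypothetical shape of a list analysis rule: join only on a hashed
# identifier, return only the segment column, never the raw data.

list_rule = {
    "joinColumns": ["hashed_email"],  # columns both parties may join on
    "listColumns": ["segment_id"],    # columns allowed in query output
}
```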
-
AWS CDK Construct Library for Amazon EventBridge Scheduler now generally available
Amazon Web Services (AWS) announces the general availability of the AWS Cloud Development Kit (AWS CDK) L2 construct library for Amazon EventBridge Scheduler. This construct library allows developers to programmatically create, configure, and manage scheduled tasks using infrastructure as code with their preferred programming language, simplifying the process of building event-driven applications. The EventBridge Scheduler construct library enables you to define schedules using cron or rate expressions, configure target destinations including AWS Lambda functions, Amazon SQS queues, and other AWS services, and manage execution windows and retry policies. Developers can now leverage type-safe programming languages to define their scheduling infrastructure, improving code maintainability and reducing configuration errors. The AWS CDK construct library for Amazon EventBridge Scheduler is available in all AWS Regions where Amazon EventBridge Scheduler is available. To get started, visit the following resources: the Amazon EventBridge Scheduler documentation, the AWS CDK API Reference for EventBridge Scheduler, and the AWS CDK Developer Guide. View the full article
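Whatever tool creates them, schedules are expressed as cron or rate strings. As a small standalone sketch (not part of the CDK API, and looser than the service's real validation, which for example requires a singular unit when the value is 1), here is what the rate(...) form looks like:

```python
# Illustrative checker for the EventBridge Scheduler rate(...) expression
# form, e.g. "rate(5 minutes)"; cron schedules use "cron(...)" instead.

import re

RATE_RE = re.compile(r"rate\(\d+ (minute|minutes|hour|hours|day|days)\)")

def is_valid_rate(expr: str) -> bool:
    return RATE_RE.fullmatch(expr) is not None

print(is_valid_rate("rate(5 minutes)"))   # True
print(is_valid_rate("cron(0 12 * * ? *)"))  # False: that's the cron form
```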
-
AWS CDK L2 Construct for Amazon Cognito Identity Pools now generally available
Amazon Web Services (AWS) announces the general availability of the AWS Cloud Development Kit (AWS CDK) L2 construct for Amazon Cognito Identity Pools. This library enables developers to programmatically define and deploy Identity Pool resources using familiar programming languages, making it easier to grant users secure access to AWS services in their applications. With this construct library, you can define Identity Pools as infrastructure as code, configure authentication providers like Amazon Cognito User Pools, social identity providers (Facebook, Google, Apple, Amazon), and SAML 2.0 providers. The library helps you implement security best practices by default and reduces the complexity of managing authentication and authorization for your web and mobile applications. The AWS CDK construct library for Amazon Cognito Identity Pools is available in all AWS Regions where Amazon Cognito is available. To get started, visit the following resources: the Amazon Cognito Identity Pools documentation and the AWS CDK API Reference. View the full article
-
AMD pins Ryzen 9000 'failures' on compatibility issues — BIOS update recommended to avoid boot problems
AMD Ryzen 7 9800X3D boot problems on ASRock motherboards were caused by a BIOS bug, which has since been fixed. View the full article
-
Instance Replication now available for Filestore
We are excited to announce Filestore Instance Replication on Google Cloud, which helps customers meet their business continuity goals and regulatory requirements. The feature offers an efficient recovery point objective (RPO) that can reach 30 minutes for data change rates of 100 MB/sec. Our customers have been telling us they need to meet regulatory and business requirements for business continuity, and have been looking for file storage that provides that capability. Instance Replication lets customers replicate Filestore instances to a secondary location - a remote region, or a separate zone within a region. The feature continuously replicates increments and changes in data taking place on the active instance to the standby instance in the secondary location. The process of replicating an instance is simple:
1. A new designated standby instance is created in the remote location.
2. The feature performs an initial sync, moving all data from the active source instance to the standby replica instance.
3. Upon completion, incremental data is continuously replicated.
4. An RPO metric lets customers monitor the replication process.
5. In the event of an outage in the source region, customers can break the replication.
Customers can simply connect their application to the replica instance and continue their business - with minimal data loss. It can take as little as 2 minutes to set up, monitoring is simple, and breaking the replication is achieved using a single command. The feature is available on Filestore Regional, Zonal, Enterprise and High Scale tiers. Instance Replication functionality is provided at no charge; customers are billed for the components used in the service, which are the Filestore instances and cross-regional networking. Give it a try here. View the full article
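As a back-of-the-envelope check on the figures quoted above (a sketch, not part of any Filestore API), a 30-minute RPO at a 100 MB/sec change rate bounds the worst-case data loss:

```python
# Worst-case data at risk = change rate * RPO window, using the numbers
# quoted in the announcement (100 MB/sec change rate, 30-minute RPO).

change_rate_mb_per_s = 100
rpo_minutes = 30

data_at_risk_gib = change_rate_mb_per_s * rpo_minutes * 60 / 1024
print(f"{data_at_risk_gib:.0f} GiB of changes could be lost in the worst case")
```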
-
Databricks on Google Cloud: Innovations Driving Data Intelligence
Since our launch on Google Cloud Platform (GCP) in 2021, Databricks on Google Cloud has provided more than 1,500 joint customers with a tightly integrated...View the full article
-
RTX 5080 laptop GPU beats RTX 4090 counterpart — delivers 10% less performance than RTX 5090
RTX 5080 laptop benchmark results show only an 8% to 15% difference from the top-end RTX 5090 laptop. View the full article
-
Applying Terraform Changes to Specific Resources using the -target Argument
Managing Infrastructure as Code (IaC) with Terraform is as common as CI/CD pipelines and incident response playbooks. However, there are moments when you don’t want Terraform to touch everything. Maybe you need to quickly redeploy an Azure Function App, or perhaps a specific Storage Account needs an urgent configuration change without disturbing unrelated resources. The […] The article Applying Terraform Changes to Specific Resources using the -target Argument was originally published on Build5Nines. To stay up-to-date, Subscribe to the Build5Nines Newsletter. View the full article
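As a hedged sketch of the flag in question: -target scopes a plan or apply to a single resource address. The resource address below is hypothetical, and the command is only constructed, not executed, since running it requires an initialized Terraform working directory:

```python
# Build (but do not run) a targeted Terraform apply command; the resource
# address "azurerm_storage_account.example" is invented for illustration.

import subprocess

def targeted_apply_cmd(resource_address: str) -> list:
    return ["terraform", "apply", f"-target={resource_address}"]

cmd = targeted_apply_cmd("azurerm_storage_account.example")
print(" ".join(cmd))  # terraform apply -target=azurerm_storage_account.example
# subprocess.run(cmd, check=True)  # uncomment inside an initialized workspace
```

Everything outside the targeted address is left untouched by the run, which is exactly the urgent-fix scenario the article describes.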
-
President Trump's 25% tariff on aluminum sparks concerns over rising PC enclosure and GPU costs
PC vendors initially believed the 25% tariff applied only to raw aluminum and steel, but the policy also extends to finished products View the full article