Showing results for tags 'telcos'.

Found 2 results

  1. Amid all the excitement around the potential of generative AI to transform business and unlock trillions of dollars in value across the global economy, it is easy to overlook the significant impact the technology is already having. The era of gen AI does not sit at some vague point in the not-too-distant future: it is here now.

The advent of generative AI marks a significant leap in the evolution of computing. For media customers, it introduces the ability to generate real-time, personalized, and unique interactions that weren't possible before. The technology is not only streamlining the content creation process; it is also transforming broadcasting operations, such as discovering and searching media archives. In telco, generative AI boosts productivity by powering a knowledge engine that can summarize and extract information from both structured and unstructured data, helping employees solve customer problems and shortening the learning curve. Furthermore, generative AI can be implemented and understood at all levels of an organization without requiring knowledge of the models' complexity.

How generative AI is transforming the telco and media industry

The telecommunications and media industry is at the forefront of integrating generative AI into its operations, viewing it as a catalyst for growth and innovation. Industry leaders are enthusiastic about its ability not only to enhance current processes but also to spearhead new innovations, create new opportunities, unlock new sources of value, and improve overall business efficiency.

Communication Service Providers (CSPs) are now using generative AI to significantly reduce the time it takes to perform network-outage root-cause analysis. Traditionally, identifying the root cause of an outage involved engineers mining through numerous logs, vendor documents, past trouble tickets, and their resolutions.
Vertex AI Search enables CSPs to extract relevant information across structured and unstructured data, significantly shortening the time it takes a human engineer to identify probable causes. "Generative AI is helping our employees to do their jobs and increasing their productivity, allowing them to spend more time strengthening the relationship with our customers," explains Uli Irnich, CIO of Vodafone Germany.

Media organizations are using generative AI to engage and retain viewers by enabling more powerful search and recommendations. With Vertex AI, customers are building advanced media recommendation applications that let audiences discover personalized content, with Google-quality results customized by optimization objectives.

Responding to challenges with a responsible approach to development

While the potential of generative AI is widely recognized, challenges to its widespread adoption persist. On the one hand, many of these stem from the sheer size of the businesses involved: legacy architecture, siloed data, and the need for skills training all present obstacles to wider and more effective use of generative AI solutions. On the other hand, many of these risk-averse, enterprise-scale organizations want to be sure that the benefits of generative AI outweigh any perceived risks. In particular, businesses seek reassurance around the security of customer data and conformance to regulation, as well as around some of the challenges that can arise when building generative AI models, such as hallucinations (more on that below).

As part of our long-standing commitment to the responsible development of AI, Google Cloud puts our AI Principles into practice. Through guidance, documentation, and practical tools, we support customers so that businesses are able to roll out their solutions in a safe, secure, and responsible way.
By tackling challenges and concerns head on, we are working to empower organizations to use generative AI safely and effectively. One such challenge is "hallucinations": cases where a generative AI model outputs incorrect or invented information in response to a prompt. For enterprises, it is key to build robust safety layers before deploying generative AI-powered applications. Models, and the ways generative AI apps leverage them, will continue to improve, and many methods for reducing hallucinations are already available to organizations.

Last year, we introduced grounding capabilities for Vertex AI, enabling large language models to incorporate specific data sources when generating responses. By giving models access to designated data sources, grounding tethers their output to that data, anchoring responses to specific sources, reducing the chance of invented content and model hallucinations, and enhancing the trustworthiness of generated content. Grounding also lets a model access information beyond its training data: by linking to designated data stores within Vertex AI Search, a grounded model can produce relevant responses.

As AI-generated images become increasingly popular, we offer digital watermarking and verification on Vertex AI, making us the first cloud provider to give enterprises a robust, usable, and scalable way to create AI-generated images responsibly and identify them with confidence. Digital watermarking on Vertex AI provides two capabilities: watermarking, which produces a watermark designed to be invisible to the human eye without damaging or reducing image quality, and verification, which determines whether an image was generated by Imagen, along with a confidence level.
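The grounding idea can be illustrated with a small local sketch. This is a conceptual analogy only, not the Vertex AI grounding API: the data store contents, the word-overlap retrieval, and the decline-to-answer behavior are assumptions made for illustration.

```python
# Conceptual sketch of grounding (NOT the Vertex AI API): responses may only
# draw on a designated data store, and the system declines when nothing
# matches, rather than inventing content.

def retrieve(query, data_store):
    """Return the stored document sharing the most words with the query, if any."""
    terms = set(query.lower().split())
    best, best_overlap = None, 0
    for doc in data_store:
        overlap = len(terms & set(doc.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = doc, overlap
    return best

def grounded_answer(query, data_store):
    """Tether the response to the data store instead of free generation."""
    source = retrieve(query, data_store)
    if source is None:
        return "No supporting source found."  # decline rather than hallucinate
    return f"According to the data store: {source}"

store = [
    "Outage on node 7 was caused by an expired TLS certificate.",
    "Firmware 2.4 resolves the packet-loss issue on edge routers.",
]
print(grounded_answer("What caused the outage on node 7?", store))
```

The key design point mirrors the text above: anchoring output to a designated source both improves relevance and gives the system a principled way to say "I don't know."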
This technology is powered by Google DeepMind's SynthID, a state-of-the-art technology that embeds the watermark directly into the pixels of the image, making it imperceptible to the human eye and very difficult to tamper with without damaging the image.

Removing harmful content for more positive user experiences

Given the versatility of large language models, predicting unintended or unexpected output is challenging. To address this, our generative AI APIs provide safety attribute scoring, enabling customers to test Google's safety filters and set confidence thresholds suitable for their specific use case and business. These safety attributes include harmful categories and topics that can be considered sensitive, each assigned a confidence score between 0 and 1 reflecting the likelihood of the input or response belonging to a given category. Implementing this measure is a step toward a positive user experience, ensuring outputs align more closely with the desired safety standards.

Embedding responsible AI governance throughout our processes

As we work to develop generative AI responsibly, we keep a close eye on emerging regulatory frameworks. Google's AI/ML Privacy Commitment outlines our belief that customers should have a high level of security and control over their data in the cloud. That commitment extends to Google Cloud generative AI solutions: by default, Google Cloud doesn't use customer data (including prompts, responses, and adapter model training data) to train its foundation models. We also offer third-party intellectual property indemnity as standard for all customers.

By integrating responsible AI principles and toolkits into all aspects of AI development, we are seeing growing confidence among organizations in using Google Cloud generative AI models and the platform. This approach enables them to enhance customer experience and, overall, foster a productive business environment in a secure, safe, and responsible manner.
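Confidence-threshold filtering of this kind is easy to sketch locally. The category names and threshold values below are assumptions for illustration, not Google's actual safety attribute list.

```python
# Illustrative sketch of confidence-threshold filtering over safety attribute
# scores in [0, 1] (category names and thresholds are assumptions, not
# Google's actual attribute list).

def passes_safety(scores, thresholds):
    """Block a response when any category's confidence meets its threshold."""
    for category, score in scores.items():
        if score >= thresholds.get(category, 1.0):  # unlisted categories pass
            return False
    return True

thresholds = {"violence": 0.6, "medical": 0.8}  # tuned per use case and business
print(passes_safety({"violence": 0.2, "medical": 0.1}, thresholds))  # True
print(passes_safety({"violence": 0.7, "medical": 0.1}, thresholds))  # False
```

Because the thresholds are per-category, a customer can be strict about some topics and permissive about others, which is exactly the per-use-case tuning the text describes.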
As we progress on a shared generative AI journey, we are committed to empowering customers with the tools and protections they need to use our services safely, securely, and with confidence. "Google Cloud generative AI is optimizing the flow from ideation to dissemination," says Daniel Hulme, Chief AI Officer at WPP. "And as we start to scale these technologies, what is really important over the coming years is how we use them in a safe, responsible and ethical way."

View the full article
  2. The introduction of 5G networking and its accompanying Service-Based Architecture (SBA) control plane brings a noteworthy shift: instead of a traditional design built on proprietary signaling between physical, black-box components, SBA uses a commodity-like, increasingly cloud-native microservice implementation that relies on standard RESTful APIs to communicate. This requires a reset in how carriers implement security, one in which proven cloud concepts will likely play a significant role.

This post shows how the HashiCorp suite of products, especially HashiCorp Vault's PKI functionality, is well suited to SBAs and caters to a variety of 5G core use cases, with tight Kubernetes integrations and a focus on zero trust networking. These tools provide a strong foundation for 5G environments because many of the constructs included in SBA resemble a modern, zero trust service mesh. Vault in particular offers full PKI management and a low-resistance path for service providers seeking to minimize the software development effort required to achieve mTLS compliance.

»The New Face of Telecom Networking

The 3GPP standards body mandates a 5G mobile packet core based on discrete software components known as Network Functions (NFs). The specifications clearly articulate requirements for NF communication pathways (known as reference points), protocols, service-based interfaces (SBIs), and, critically, how these network channels are secured.

[Image: SBI representation of a 5G service-based architecture]

Orchestration platforms have opened up powerful integration, scaling, and locality opportunities for hosting and managing these NFs that were not possible in earlier generations of cellular technology. A mature 5G core could span multiple datacenters and public cloud regions, and scale to thousands of worker nodes.
An entire Kubernetes cluster, for example, may be dedicated to the requirements of a single NF: internally, a function may consist of many pods, deployments, services, and other Kubernetes constructs. The SBI itself could be any network interface associated with an NF that is attached to the control plane network for the purpose of consuming and/or providing a service in accordance with the specification. The 5G SBA also brings new security challenges and opportunities.

»Securing Network Function Communication

Security architecture and procedures for 5G System (3GPP TS 33.501) is the document of record detailing the security-related requirements within the 5G SBA. Section 13.1.0 states:

"All network functions shall support mutually authenticated TLS and HTTPS as specified in RFC 7540 [47] and RFC 2818 [90]. The identities in the end entity certificates shall be used for authentication and policy checks. Network functions shall support both server-side and client-side certificates. TLS client and server certificates shall be compliant with the SBA certificate profile specified in clause 6.1.3c of TS 33.310 [5]."

mTLS is thus a fundamental requirement within the 5G SBA for securing SBI flows at the authentication level. But what about authorization?

One NF is especially crucial in the context of security: the Network Repository Function (NRF) is responsible for dynamically registering all SBA components as they come online, acting as a service discovery mechanism that can be queried to locate healthy services. In addition, the NRF has universal awareness of which functions should be permitted to communicate, issuing appropriately scoped OAuth2 tokens to each entity. These tokens authorize network flows between NFs, further securing the fabric.

[Image: NF authentication and authorization flow]

There are two modes of service-to-service communication described in the 3GPP specifications.
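The NRF's scoped-token authorization described above can be sketched conceptually. The policy table, token shape, and service names below are illustrative assumptions; a real NRF issues cryptographically signed OAuth2 tokens, not plain dictionaries.

```python
# Conceptual sketch of NRF-style scoped-token authorization (policy table,
# token shape, and service names are assumptions, not 3GPP-defined values).

ALLOWED_FLOWS = {
    # consumer NF -> NF services it may call (illustrative policy)
    "AMF": {"nausf-auth", "nudm-sdm"},
    "SMF": {"nudm-sdm", "npcf-smpolicycontrol"},
}

def issue_token(consumer, requested_scope):
    """NRF side: grant a token only for flows the policy permits."""
    if requested_scope not in ALLOWED_FLOWS.get(consumer, set()):
        return None
    return {"sub": consumer, "scope": requested_scope}

def authorize(token, target_service):
    """Producer side: the token's scope must match the service being called."""
    return token is not None and token["scope"] == target_service

print(authorize(issue_token("AMF", "nausf-auth"), "nausf-auth"))  # True
print(authorize(issue_token("AMF", "npcf-smpolicycontrol"),
                "npcf-smpolicycontrol"))  # False: not in AMF's allowed flows
```

The point of the sketch is the two-sided check: the NRF constrains what a consumer can even request, and the producer verifies the scope on every call, so mTLS identity (authentication) and token scope (authorization) work together.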
In the Direct Communication mode, NFs engage in service discovery and inter-function network operations as explained above. In the Indirect Communication mode, a Service Communication Proxy (SCP) may optionally intercept flows and even broker discovery requests with the NRF on behalf of a consumer. Various SCP implementations can augment SBA service networking by introducing intelligent load balancing and failover, policy-based controls, and monitoring.

»If It Looks Like a Mesh, Walks Like a Mesh…

To summarize, the 5G SBA includes a number of broad technology constructs:

  • Microservice architecture based on Kubernetes
  • Hybrid-cloud/multi-cloud capabilities
  • Service discovery and load balancing
  • Network authentication via mTLS
  • OAuth2 token-based authorization
  • An optional proxy-based mode (policy and telemetry)

If this is starting to sound familiar, you're not alone. While the Indirect Communication mode is optional (and does not specify a sidecar proxy), these elements combined closely resemble a modern, zero trust service mesh. Intentional or not, this emergent pattern could evolve toward the same architectural trends, platforms, and abstractions being adopted elsewhere in modern software.

To that end, HashiCorp's enterprise products cater to a variety of core 5G use cases, with tight Kubernetes integrations and a keen focus on zero trust networking:

  • HashiCorp Terraform: builds reliable multi-cloud infrastructure and deploys complex workloads to Kubernetes using industry-standard infrastructure-as-code practices
  • HashiCorp Consul: discovers services and secures networks through identity-based authorization
  • HashiCorp Vault: protects sensitive data and delivers automated PKI at scale to achieve mTLS for authenticated SBI communications

HashiCorp Vault in particular presents an attractive solution for securing SBI flows with mTLS authentication.
Vault is a distributed, highly available secrets management platform that can span multiple private and public cloud regions, accommodating a wide variety of SBA consumer personas and environments. Several replication options offer robust disaster recovery features, as well as increased performance through horizontal scaling.

[Image: Vault high-availability architecture]

»Certificate Lifecycle Automation with Vault

The PKI functionality of Vault (one of many secrets engines available) is powerful, comprehensive, and simple to implement. Vault supports an arbitrary number of certificate authorities (CAs) and intermediates, which can be generated internally or imported from external sources such as hardware security modules (HSMs). Fully automated cross-signing capabilities create additional options for managing 5G provider trust boundaries and network topologies.

Access to Vault itself must be authenticated. Thankfully, this is a Kubernetes-friendly operation that permits straightforward integration options for container-based NF workloads. Supported authentication methods include all of the major public cloud machine-identity systems, a per-cluster native Kubernetes option, and JWT-based authentication that integrates seamlessly with the OIDC provider built into Kubernetes. The JWT-based method can scale to support many clusters in parallel, utilizing the service account tokens that are projected to pods by default.

Once successfully authenticated to Vault, a policy attached to the auth method dictates the client's ability to access secrets within an engine. These policies can be highly granular, based on parameters such as the client's JWT token claims, Microsoft Azure managed identity, AWS Identity and Access Management (IAM) role, and more.
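Assuming a JWT auth method mounted at Vault's default path, the login request a pod would make can be sketched as below. The Vault address, role name, and token value are placeholders for illustration, and no request is actually sent.

```python
# Sketch of a pod authenticating to Vault's JWT auth method with its
# projected service account token. VAULT_ADDR and the role name are
# hypothetical; this only constructs the request, it does not send it.
import json
from pathlib import Path

VAULT_ADDR = "https://vault.example.internal:8200"  # hypothetical address
# default location where Kubernetes projects a pod's service account token
TOKEN_PATH = Path("/var/run/secrets/kubernetes.io/serviceaccount/token")

def build_login_request(role, jwt):
    """Return the URL and JSON body for POST /v1/auth/jwt/login."""
    url = f"{VAULT_ADDR}/v1/auth/jwt/login"
    body = json.dumps({"role": role, "jwt": jwt}).encode()
    return url, body

# In a real pod the token would be read with: jwt = TOKEN_PATH.read_text()
url, body = build_login_request("nf-smf", "<projected-sa-jwt>")
print(url)
```

A successful login returns a short-lived Vault token whose attached policy then governs which secrets engines, such as PKI, the workload may use.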
[Image: Vault logical flow from authentication to secret consumption]

If a policy grants access to a PKI secrets engine, the client may request a certificate, specifying parameters in the API request payload such as:

  • Common name
  • Subject alternative names (SANs)
  • IP SANs
  • Expiry time

The allowed parameters of the request are constrained by a role object configured on the PKI engine, which defines permitted domain names, maximum TTL, and additional enforcements for the associated certificate authority. An authenticated, authorized, and valid request results in the issuance of a certificate and private key, delivered back to the client as a JSON payload, which can then be parsed and rendered to the pod filesystem as the NF application's requirements and configuration dictate.

The authentication and certificate-request processes described above can be executed by API calls from the primary container, an init container, or any number of custom solutions. To reduce the burden of developing unique strategies for each NF, organizations may instead choose to leverage the Vault Agent Injector for Kubernetes to automate the distribution of certificates. This solution consists of a mutating admission controller that intercepts pod lifecycle events and modifies the pod spec to include a Vault Agent sidecar container. Once configured, standard pod annotations let operations teams manage the PKI lifecycle, ensuring that certificates and private keys are rendered to the appropriate filesystem locations and renewed prior to expiry, without ever touching the NF application code. The agent can additionally execute arbitrary commands or API calls upon certificate renewal, which can be configured to reload a service or restart a pod. The injector provides a low-resistance path for service providers seeking to minimize the software development effort required to achieve mTLS compliance.
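For illustration, the injector annotations for a PKI certificate might look like the following sketch. The role name, PKI engine path, common name, and reload command are assumptions, not values from this post.

```yaml
# Illustrative Vault Agent Injector pod annotations (role, engine path,
# common name, and reload command are assumptions for this sketch).
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "nf-smf"
    vault.hashicorp.com/agent-inject-secret-tls.crt: "pki/issue/nf-smf"
    vault.hashicorp.com/agent-inject-template-tls.crt: |
      {{- with secret "pki/issue/nf-smf" "common_name=smf.5gc.example.internal" -}}
      {{ .Data.certificate }}
      {{- end -}}
    # run on renewal so the NF picks up the fresh certificate
    vault.hashicorp.com/agent-inject-command-tls.crt: "pkill -HUP nf-process"
```

With annotations like these, the sidecar agent handles login, issuance, filesystem rendering, and renewal, leaving the NF application code untouched.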
[Image: Vault JWT auth method with Kubernetes as OIDC provider]

Vault also integrates with Jetstack's cert-manager, which makes it possible to configure Vault as a ClusterIssuer in Kubernetes and subsequently deliver certificates to Ingresses and other cluster objects. This approach can be useful if the SBI in question specifies a TLS-terminating ingress controller.

Software vendors building 5G NFs may alternatively decide to incorporate Vault into their existing middleware or configuration automation via a more centralized API integration. For example, a service may already be in place to distribute certificates to pods within the NF ecosystem that have interfaces on the SBI message bus. Such a solution might rely on a legacy certificate procurement protocol such as CMPv2. Replacing that mechanism with simple HTTP API calls to Vault would not only be a relatively trivial effort, it would also be a shift very much in the spirit of the 3GPP inclination toward standard RESTful, HTTP-based communication, and of broader industry trends.

»Working Together to Make 5G More Secure

HashiCorp accelerates cloud transformation for telcos pursuing automation, operational maturity, and compliance for 5G networks. Join the HashiCorp Telco user group to stay up to date with recent developments, blog posts, talk tracks, and industry trends, or reach out to the HashiCorp Telco team at telco@hashicorp.com.

View the full article