Search the Community
Showing results for tags 'consul'.
-
Migrating from HashiCorp Consul service discovery to service mesh is a smart move for platform teams looking to boost their applications’ security, observability, and availability, all without requiring modifications from their development teams. This blog post will briefly introduce you to the advantages of moving to a service mesh and provide a step-by-step, no-downtime migration guide.

Service mesh benefits

Why is a service mesh migration worth your time? Here are some of the benefits:

Advance security

Service mesh allows teams to quickly enforce zero trust security principles using mTLS on all east/west traffic, significantly reducing the risk of unauthorized access and data breaches. Platform teams can enable an application’s existing service discovery DNS lookup to allow for both HTTP and mTLS connections. This allows all applications to transition to using mTLS connections without impacting any of their dependent services (such as downstreams).

Enhance observability

A service mesh also provides application teams with new capabilities such as distributed tracing and data plane metrics. Distributed tracing acts like a GPS tracking system for each request, providing detailed insights into its journey across services and helping quickly pinpoint bottlenecks and performance issues. Data plane metrics offer real-time insights into traffic flows between microservices, including requests per second, error rates, and advanced L7 features such as retries. These insights can improve decision-making and lead to higher application availability.

Increase resilience

Service mesh improves application availability by automatically handling retries, rate limiting, circuit breaking, and timeouts, helping to ensure that services remain accessible and performant, even under adverse conditions. Applications in a service mesh can use traffic splitting for blue/green or canary deployments to reduce risks associated with updates and new releases.

Improve multi-tenancy scalability

If you need to give users self-service capabilities in multi-tenant environments or meet higher compliance requirements, consider upgrading to Consul Enterprise. With the power to manage their own namespaces or even entire service meshes, Consul Enterprise gives teams the autonomy to innovate and streamline operations. It ensures team isolation, enabling the safe management of application deployments and resilience strategies. Beyond operational agility, Consul Enterprise empowers teams to comply with rigorous regulations by offering L3/L4 networking control over service mesh connections, FIPS 140-2 compliance, and full audit logs. This enhanced level of governance and flexibility allows teams to fine-tune their service ecosystems to meet specific operational demands and compliance needs.

Migration to service mesh

Now that we’ve explored the top reasons to switch to Consul service mesh, it’s time to walk through the migration, step by step. We’ll begin with an overview of the Amazon EKS cluster and the Consul components that will be deployed. In this guide, the Consul server and example services will be deployed on the same EKS cluster for simplicity. However, the principles and steps outlined are also relevant for environments using virtual machines or a combination of platforms. The EKS cluster in this guide will run a legacy api service, using only service discovery, and the new mesh-enabled web service, which is accessible only through the Consul API gateway.
The diagram below shows the initial environment that will be set up.

To streamline the initial setup, the following key steps are condensed into bullet points, with detailed step-by-step instructions available in the README.md for this project’s GitHub repo.

- Provision infrastructure: Use HashiCorp Terraform to provision an AWS VPC and EKS cluster. This includes cloning the repository, initializing Terraform, and applying infrastructure as code to set up the environment.
- Connect to EKS: Update the kubeconfig with the EKS cluster details using the AWS CLI and set up convenient kubectl aliases for easier management.
- Install the AWS LB controller: Set up the AWS load balancer controller to map internal Network Load Balancers or Application Load Balancers to Kubernetes services. The Consul Helm chart will use AWS LB annotations to properly set up internally routable mesh gateways and make the Consul UI externally available.
- Install the Consul Helm chart: Deploy the example Consul Helm chart values, enabling the following components:
  - TLS: Enables TLS across the cluster to verify the authenticity of the Consul servers and clients
  - Access Control Lists: Automatically manages ACL tokens and policies for all of Consul
  - connect-inject: Configures Consul’s automatic service mesh sidecar injector
  - api-gateway: Enables the Consul API gateway and manages it with Kubernetes Gateway API CRDs
  - sync-catalog: A process that syncs Kubernetes services to Consul for service discovery
  - cni: Facilitates service mesh traffic redirection without requiring CAP_NET_ADMIN privileges for Kubernetes pods
  - metrics: Exposes Prometheus metrics for Consul servers, gateways, and Envoy sidecars
- Set up DNS forwarding in EKS: Configure DNS forwarding within EKS to allow service discovery via Consul.
- Deploy a service using Consul service discovery: Deploy api and use Kubernetes catalog sync to automatically register the service with Consul, the same way VMs register services using Consul agents.
- Deploy a service using Consul service mesh: Deploy the web service into the mesh. Mesh-enabled services aren’t available externally without a special ingress or API gateway allowing the traffic. Set up the Consul API gateway with a route to web so it's accessible from the browser.

The steps above complete the initial setup. Consul is installed on EKS, and the web service is operational within the service mesh, directing requests to the api service outside the mesh, which uses service discovery exclusively. The Consul API gateway has been set up with routes to enable external requests to web.

Run the commands below to retrieve the URL for the Consul API gateway and store it in a variable for future use. The external address may take a couple of minutes to propagate, so be patient.

export APIGW_URL=$(kubectl get services --namespace=consul api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
nslookup ${APIGW_URL}

Once the gateway is resolvable, use the generated URL below to access web and verify the initial environment is working as expected.

echo "http://${APIGW_URL}/ui"

The image above shows the expected response: web is within the mesh accessing api.service.consul, which is located outside the mesh. Traffic between web and api is HTTP and unencrypted.
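One building block worth calling out before the migration steps: enrolling a workload in the mesh is done with a pod annotation handled by the connect-inject webhook, not with application changes. The Deployment below is a hypothetical minimal sketch; the names, labels, and image are placeholders rather than the repo's actual api-v2 manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-v2                        # placeholder name for a mesh-enabled deployment
  namespace: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
      annotations:
        # The Consul injector sees this annotation and adds the Envoy sidecar,
        # enrolling the pod in the service mesh.
        consul.hashicorp.com/connect-inject: "true"
    spec:
      containers:
        - name: api
          image: example/api:latest    # placeholder; the demo repo ships its own api image
          ports:
            - containerPort: 9091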
Now it’s time to migrate api into the service mesh.

Migrate services into the service mesh

To smoothly migrate services into the service mesh, we'll follow a clear, three-step approach:

Step 1: Enable permissive mode
Step 2: Enforce mTLS
Step 3: Use virtual services

Step 1: Enable permissive mode

To begin, you need to migrate api into the mesh. It’s crucial that HTTP requests to api.service.consul continue to function for downstream services not in the service mesh, while services within the mesh use mTLS for secure communication. The first step is implementing permissive MutualTLSMode for api, allowing it to accept both HTTP and mTLS connections. To enable permissive MutualTLSMode, the api service defaults need to configure MutualTLSMode to permissive. Here’s an example ServiceDefaults:

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
  namespace: api
spec:
  protocol: http
  mutualTLSMode: "permissive"

Create a new deployment for api enabling service mesh and apply these new ServiceDefaults to enable permissive mode:

kubectl apply -f api/permissive_mTLS_mode/init-consul-config/servicedefaults-permissive.yaml
kubectl apply -f api/permissive_mTLS_mode/api-v2-mesh-enabled.yaml

Refresh the browser a few times and watch how the same requests from web to api.service.consul are routed to both the api (non-mesh) and api (mesh) deployments. Consul uses a weighted round-robin load balancing algorithm by default to distribute requests from web across both api deployments. After verifying the api (mesh) deployment is working with the original DNS lookup api.service.consul, remove the original api (non-mesh) deployment:

kubectl -n api delete deployment api-v1

Newly onboarded services can run in permissive mode while other downstream and upstream services are migrated to the service mesh in any order. This ensures a smooth transition for all services. Services can be onboarded to the mesh upon their next release using an annotation or by enabling the entire namespace, which doesn’t require changes from the development team.

While in permissive mode, requests to the original service discovery name api.service.consul will be over HTTP. Verify this by sniffing the incoming traffic to the api pod while refreshing the browser to generate traffic:

kubectl debug -it -n api $(kubectl -n api get pods --output jsonpath='{.items[0].metadata.name}') --target consul-dataplane --image nicolaka/netshoot -- tcpdump -i eth0 src port 9091 -A

Targeting container "consul-dataplane". If you don't see processes from this container it may be because the container runtime doesn't support this feature.
Defaulting debug container name to debugger-v77g6.
If you don't see a command prompt, try pressing enter.
{
  "name": "api (mesh)",
  "uri": "/",
  "type": "HTTP",
  "ip_addresses": [
    "10.15.3.183"
  ],
  "start_time": "2024-02-16T19:46:35.805652",
  "end_time": "2024-02-16T19:46:35.827025",
  "duration": "21.372186ms",
  "body": "API response",
  "code": 200
}

Follow these steps to migrate all downstream and upstream services into the service mesh without impacting service availability or development teams.

Step 2: Enforce mTLS

After migrating all dependent downstream services into the mesh, disable permissive mode and start enforcing secure mTLS connections for all requests to api. To avoid any downstream service changes or disruptions, configure the service mesh to properly handle the original DNS lookups, so web can continue making requests to api.service.consul.
During this step, switch api from permissive to strict mutualTLSMode to enforce mTLS for all requests. To ensure downstream services, such as web using api.service.consul, aren’t impacted, set the dialedDirectly transparent proxy mode. This enables a TCP passthrough on the api service’s Envoy sidecar proxy and enforces mTLS on requests going to the api pod IP. This means requests for api.service.consul will be routed to the api pod IP, where the proxy is now listening and enforcing mTLS. These two settings can be updated while the api service is running.

To enable strict MutualTLSMode and dialedDirectly, update the api ServiceDefaults:

kubectl apply -f ./api/permissive_mTLS_mode/init-consul-config/intention-api.yaml
kubectl apply -f ./api/permissive_mTLS_mode/servicedefaults-strict-dialedDirect.yaml.enable

Note: Before enabling strict mutualTLSMode, a service intention is created first to ensure web is authorized to make requests to api.

Now all requests to api.service.consul are being encrypted with mTLS:

kubectl debug -it -n api $(kubectl -n api get pods --output jsonpath='{.items[0].metadata.name}') --target consul-dataplane --image nicolaka/netshoot -- tcpdump -i eth0 src port 9091 -A

Targeting container "consul-dataplane". If you don't see processes from this container it may be because the container runtime doesn't support this feature.
Defaulting debug container name to debugger-g669d.
If you don't see a command prompt, try pressing enter.
20:18:34.047169 IP api-v2-b45bf7655-9kshs.9091 > 10-15-3-175.web.web.svc.cluster.local.43512: Flags [P.], seq 148:626, ack 3559, win 462, options [nop,nop,TS val 3923183901 ecr 2279397636], length 478
E....;@....' ... [remaining ciphertext truncated; the payload is no longer readable plaintext]

Congratulations! You have successfully migrated an existing service into the Consul service mesh and enforced mTLS without requiring any changes from development.

Step 3: Use virtual services

For development teams to take full advantage of L7 traffic capabilities such as retries, rate limits, timeouts, circuit breakers, and traffic splitters, they will want to start using virtual services. For example, web would stop making requests to api.service.consul and start using api.virtual.consul. Once web is updated to use the virtual address, it will have immediate access to all L7 traffic routing rules applied to api. These capabilities provide huge improvements in service availability that any development team will appreciate, and they can make this change at their convenience. Here’s how:

Deploy web-v2, which has been updated to use api.virtual.consul. Refresh the browser until you see requests from web-v2 route to the new virtual address (you may need to clear the cache). Once validated, delete web-v1 to ensure all requests use the new virtual address:

kubectl apply -f api/permissive_mTLS_mode/web-v2-virtualaddress.yaml.enable
kubectl -n web delete deploy/web-v1

web is now making requests to api.virtual.consul.
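With requests flowing through the virtual address, L7 policy attached to api now applies to every call web makes. For instance, a canary-style traffic split, discussed next, could be expressed with a ServiceSplitter like the hedged sketch below; the subset names are illustrative assumptions and presume a matching ServiceResolver that defines them.

# Sketch: route 90% of traffic to the current api subset and 10% to a canary.
# Subset names assume a ServiceResolver that defines "v1" and "v2" subsets.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: api
  namespace: api
spec:
  splits:
    - weight: 90
      serviceSubset: v1
    - weight: 10
      serviceSubset: v2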
api can now create traffic splitters to support canary deployments, or retries to improve availability, and web will automatically apply them with every request to api. Once all downstream services are using the virtual address, disable dialedDirectly for api to ensure L7 traffic patterns are applied to all future requests (included in the ServiceDefaults recommendation example below).

Additional security recommendations

Following the migration, there are several ways to further secure your service mesh. First, remove the mutualTLSMode line from the service defaults for each service. This enforces strict mode and reduces misconfiguration risks for a critical security setting:

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
  namespace: api
spec:
  protocol: http
  #mutualTLSMode: "strict"
  transparentProxy:
    #dialedDirectly: true

Next, disable the allowEnablingPermissiveMutualTLS setting mesh-wide so no services can enable permissive mode in the future and bypass mTLS. Note: If services were already able to set MutualTLSMode=permissive, this mesh-wide setting will not override services already running in permissive mode, because doing so could impact service availability. Those services must first remove permissive MutualTLSMode, as recommended above:

apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
  namespace: consul
spec:
  #allowEnablingPermissiveMutualTLS: true

Additionally, secure the mesh by setting meshDestinationsOnly: true to restrict any service from making external requests. A terminating gateway would then be required to authorize all external requests:

apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
  namespace: consul
spec:
  #allowEnablingPermissiveMutualTLS: true
  transparentProxy:
    meshDestinationsOnly: true

Apply these additional security recommendations using the following commands:

kubectl apply -f api/permissive_mTLS_mode/init-consul-config/servicedefaults-std.yaml.enable
kubectl apply -f web/init-consul-config/mesh-secure.yaml.enable

Recap

Transitioning from Consul service discovery to service mesh brings immediate enhancements in zero trust security and observability. By following the three-step approach described in this blog post, platform teams can smoothly transition to service mesh without modifying current application configurations. This approach benefits organizations that have numerous development teams, operate in silos, or face communication hurdles. Initially, enabling permissive MutualTLSMode allows services to support both HTTP and mTLS connections, ensuring accessibility across mesh and non-mesh services. Subsequently, enforcing mTLS secures all traffic with encryption, and setting dialedDirectly supports all existing requests using Consul DNS. Finally, adopting virtual services unlocks advanced Layer 7 traffic-management features, allowing developers to enhance service reliability at their own pace by simply updating the request address from service to virtual.

As your service mesh and multi-tenant ecosystem grow, you might encounter increasing demands for self-service options and higher compliance standards. Learn how Consul Enterprise extends the foundational capabilities of Consul with enhanced governance, multi-tenant support, and operational agility, ensuring organizations can meet the demands of complex service ecosystems and regulatory standards with ease.

View the full article
-
We're excited to announce that HashiCorp Consul 1.18 is now generally available. This release introduces significant enhancements for HashiCorp Consul, our service networking solution designed to help users discover and securely connect any application across any cloud or runtime. These new capabilities aid organizations in increasing enterprise reliability and scale, facilitating easier deployment and management of distributed applications across various environments. This blog post will take a closer look at the key enhancements in Consul 1.18:

- Long-Term Support (LTS) releases for Consul Enterprise
- Fault injection for Consul Enterprise service mesh
- Consul ECS runtime enhancements: transparent proxy, API gateways, and terminating gateways

Enterprise reliability

Two of Consul 1.18’s major new features enhance enterprise reliability, reduce maintenance burden, and enable service resiliency improvements.

Long-Term Support releases (Enterprise)

We’re pleased to introduce a Long-Term Support (LTS) release program for self-managed Consul Enterprise, starting with versions 1.15 and 1.18. This program designates the first major release of each calendar year, typically in late February, as an LTS release. The annual LTS release will receive critical fixes and security patches for two years as well as a hardened upgrade path to the next LTS release. Upgrading critical software is a balancing act: action incurs engineering effort and risks, while inaction leaves vulnerabilities and defects open. Consul Enterprise LTS reduces both overhead and risk beyond the industry standard by providing critical fixes for an extra year without requiring major upgrades. For more information, refer to this blog post: Consul Enterprise Long-Term Support (LTS) improves operational efficiency.

Fault injection for service mesh (Enterprise)

Fault injection for service mesh enables organizations to explore and enhance their system resilience in microservice architectures. Teams can explore service behavior in response to problems with an upstream service by injecting faults without changing application code. For example, how does the ‘frontend’ service respond to latency from the ‘api’ service? Just configure the service mesh to cause the ‘api’ service to automatically add 3,000ms of latency to 100% of requests. The developers of the ‘frontend’ service can then iteratively modify and test their code to provide a better consumer experience when facing latency.

Three fault types can be introduced to a specified percentage of HTTP or gRPC traffic to a service:

- Error code (e.g. 429 too many requests)
- Response latency (e.g. 5,000ms)
- Response rate limit (e.g. 1,000KiB/s)

Faults can also be conditionally injected based on request header matching. Referencing the previous example, the service mesh could be configured to inject latency into ‘api’ service responses only when the X-FAULT-INJECTION-OPT-IN request header has the value true. Now, ‘frontend’ service developers can opt into latency in ‘api’ service responses by including that request header. Refer to the fault injection documentation for more information.
Expanded runtime support

Consul is designed to provide a unified solution across any cloud and any runtime, including:

- Virtual machines (VMs) and bare metal machines
- Kubernetes
- HashiCorp Nomad: A simple and flexible scheduler and orchestrator for managing containers and non-containerized applications
- Amazon ECS: Serverless container runtime
- AWS Lambda: Serverless function runtime

Consul 1.18 includes several enhancements to the maturity of its Amazon ECS runtime adaptation.

Amazon ECS: Transparent proxy support

Transparent proxy mode is a feature available on some Consul runtimes (Kubernetes, VMs) that simplifies both:

- Security: All outbound traffic from, and inbound traffic to, a service must go through its local service mesh sidecar proxy. Therefore, the service mesh cannot be bypassed, ensuring enforcement of all policies — such as service-to-service authorization.
- Service onboarding: Services can reference their upstreams without needing to explicitly configure them in a Consul service definition.

Consul 1.18 and Consul ECS 0.8 add support for transparent proxy mode for ECS on Amazon EC2 tasks. With transparent proxy mode enabled, all traffic to and from each application container will pass through the sidecar proxy container within the same task. Refer to the Consul ECS technical specifications and the EC2 with transparent proxy example deployment for more details.

Amazon ECS: Expanded gateway support for mesh ingress and egress

Consul service mesh provides built-in gateways for managing traffic coming into and out of the service mesh:

- API gateway for ingress traffic: Controls access from services outside the mesh into the mesh, including authorization, TLS settings, and traffic management.
- Terminating gateway for egress traffic: Controls access from services in the mesh to services outside the mesh, including authorization and TLS settings.

Consul 1.18 and Consul ECS 0.8 add support for configuring API and terminating gateways as ECS tasks. Refer to the following deployments in the Consul ECS example repository for more details:

- API gateway on ECS example
- Terminating gateway on ECS example — with transparent proxy
- Terminating gateway on ECS example — with (m)TLS to the external service

Next steps for HashiCorp Consul

Our goal is for Consul to enable a consistent, enterprise-ready control plane to discover and securely connect any application. Consul 1.18 includes enhanced workflow management, reliability, and security for service networking. We are excited for users to try these new Consul updates and further expand their service discovery and service mesh implementations. Here’s how to get started:

- Learn more in the Consul documentation.
- Get started with Consul 1.18 on Kubernetes by installing the latest Helm chart, provided in the Consul Kubernetes documentation.
- For more information on Consul Enterprise LTS, refer to Consul Enterprise Long-Term Support (LTS) improves operational efficiency.
- For more information on HashiCorp’s Long-Term Support policy, refer to HashiCorp Long-Term Support (LTS) releases.
- Try Consul Enterprise by starting a free trial.

View the full article
-
We are pleased to announce a Long-Term Support (LTS) release program for HashiCorp Consul Enterprise, starting with versions 1.15 and 1.18. Going forward, the first major release of each calendar year, typically in late February, will be an LTS release.

The challenge: balancing operational overhead and risk

Organizations often face a dilemma related to maintaining and updating mission-critical software. On one side is the cost of action. No matter how technically simple or reliable an upgrade is, all upgrades involve effort and risk. To minimize risk, major upgrades may need months to plan, test, approve, and deploy. Frequent upgrades may be too costly in terms of operational burden. And every major upgrade has some risk, no matter how much due diligence is performed. On the other side is the cost of inaction. All software has defects and security vulnerabilities that are discovered and fixed in future versions. Without upgrading, organizations remain susceptible to emerging issues that introduce risk to their business. How can an organization balance the costs of action versus inaction in upgrading mission-critical software, such as HashiCorp Consul?

The solution: Long-Term Support releases

With Consul Enterprise LTS releases, organizations can reduce both operational overhead and risk. It enables organizations to receive critical fixes in minor releases without having to upgrade their major version more than once a year. Consul Enterprise is the first of several HashiCorp commercial products to offer LTS releases with the following key characteristics:

- Extended maintenance: Two years of critical fixes provided through minor releases
- Efficient upgrades: Support for direct upgrades from one LTS release to the next, reducing major version upgrade risk and improving operational efficiency

Consul Enterprise LTS releases offer several key advantages compared to the industry standard and to standard Consul releases, as shown in this table:

Characteristic | Industry standard | Consul Enterprise standard release | Consul Enterprise LTS release
Release lifetime | 7 - 15 months | 12 months | 24 months
Maximum upgrade jump | +2 major versions | +2 major versions | +3 major versions (from one LTS to the next)
Average time between major version upgrades | 3 - 6 months | 4 - 8 months | 12 months

Getting started with Consul Enterprise LTS

LTS is available now to all Consul Enterprise customers with self-managed deployments. To upgrade your Consul Enterprise deployment to an LTS version (1.15 or 1.18), refer to Consul’s upgrade documentation. If you currently have Consul Enterprise 1.15 deployed, you’re already running a maintained LTS version — no further action is required at this time. Once you’re running a maintained version of Consul Enterprise LTS, HashiCorp recommends upgrading once a year to the next LTS version. This upgrade pattern ensures your organization is always operating a maintained release, minimizes major version upgrades, and maximizes predictability for your planning purposes. For more information, refer to the Consul Enterprise LTS documentation and to HashiCorp’s multi-product LTS statement.

Next steps for HashiCorp Consul

Get started with Consul through our many tutorials for both beginners and advanced users. Learn more about Consul Enterprise’s capabilities by starting a free trial.

View the full article
-
As more customers use multiple cloud services or microservices, they face the difficulty of consistently managing and connecting their services across various environments, including on-premises, different clouds, and existing legacy systems. HashiCorp Consul's service mesh addresses this challenge by securely and consistently connecting applications on any runtime, network, cloud platform, or on-premises setup. In the Google Cloud ecosystem, Consul can be deployed across Google Kubernetes Engine (GKE) and Anthos GKE. Now, Consul 1.16 is also supported on GKE Autopilot, Google Cloud’s fully managed Kubernetes platform for containerized workloads. Consul 1.17 is currently on track to be supported on GKE Autopilot later this year.

Benefits of GKE Autopilot

In 2021, Google Cloud introduced GKE Autopilot, a streamlined configuration for Kubernetes that follows GKE best practices, with Google managing the cluster configuration. Reducing the complexity that comes with workloads using Kubernetes, Google’s GKE Autopilot simplifies operations by managing infrastructure, control plane, and nodes, while reducing operational and maintenance costs. Consul is the latest partner product to be generally available, fleet-wide, on GKE Autopilot. By deploying Consul on GKE Autopilot, customers can connect services and applications across clouds, platforms, and services while realizing the benefits of a simplified Kubernetes experience. The key benefits of using Autopilot include more time to focus on building your application, a strong security posture out-of-the-box, and reduced pricing — paying only for what you use:

- Focus on building and deploying your applications: With Autopilot, Google manages the infrastructure using best practices for GKE. Using Consul, customers can optimize operations through centralized management and automation, saving valuable time and resources for developers.
- Out-of-the-box security: With years of Kubernetes experience, GKE Autopilot implements GKE-hardening guidelines and security best practices, while blocking features deemed less safe (i.e. privileged pod- and host-level access). As a part of HashiCorp’s zero trust security solution, Consul enables least-privileged access by using identity-based authorization and service-to-service encryption.
- Pay-as-you-go: GKE Autopilot’s pricing model simplifies billing forecasts and attribution because it's based on resources requested by your pods. Visit the Google Cloud and HashiCorp websites to learn more about GKE Autopilot pricing and HashiCorp Consul pricing.

Deploying Consul on GKE Autopilot

Deploying Consul on GKE Autopilot facilitates service networking across a multi-cloud environment or microservices architecture, allowing customers to quickly and securely deploy and manage Kubernetes clusters. With Consul integrated across Google Cloud Kubernetes, including GKE, GKE Autopilot, and Anthos GKE, Consul helps bolster application resilience, increase uptime, accelerate application deployment, and improve security across service-to-service communications for clusters, while reducing overall operational load.
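Mechanically, the deployment is a standard Consul Helm install: you write a values.yaml (the Autopilot-specific values are shown next) and install the chart from the HashiCorp repository. A typical sequence looks like this; the release name and namespace below are common choices rather than requirements.

# Add the HashiCorp Helm repository and install Consul with the values file.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install consul hashicorp/consul --namespace consul --create-namespace --values values.yaml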
Today, you can deploy Consul service mesh on GKE Autopilot using the following configuration for Helm in your values.yaml file:

global:
  name: consul
connectInject:
  enabled: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/home/kubernetes/bin"
    cniNetDir: "/etc/cni/net.d"

In addition, if you are using a Consul API gateway for north-south traffic, you will need to configure the Helm chart so you can leverage the existing Kubernetes Gateway API resources provided by default when provisioning GKE Autopilot. We recommend the configuration shown below for most deployments on GKE Autopilot as it provides the greatest flexibility by allowing both API gateway and service mesh workflows. Refer to Install Consul on GKE Autopilot for more information.

global:
  name: consul
connectInject:
  enabled: true
  apiGateway:
    manageExternalCRDs: false
    manageNonStandardCRDs: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/home/kubernetes/bin"
    cniNetDir: "/etc/cni/net.d"

Learn more

You can learn more about the process that Google Cloud uses to support HashiCorp Consul workloads on GKE Autopilot clusters with this GKE documentation and resources page. Here’s how to get started on Consul:

- Learn more in the Consul documentation.
- Begin using Consul 1.16 by installing the latest Helm chart, and learn how to use a multi-port service in Consul on Kubernetes deployments.
- Try Consul Enterprise by starting a free trial.
- Sign up for HashiCorp-managed HCP Consul.

View the full article
-
Today at HashiConf, we are introducing a number of significant enhancements for HashiCorp Consul, our service networking solution that helps users discover and securely connect any application. We're also formally introducing HCP Consul Central, previously known as the management plane for HCP Consul. These new capabilities help organizations enhance workflow management, increase reliability and scale, and bolster security for operators as they leverage a cloud operating model for service networking. Some of the notable updates include:

- Multi-port support (beta): A new, simplified way to onboard modern distributed applications that require different ports for various traffic types for intricate client-server communication
- Locality-aware service mesh routing within a Consul datacenter: Optimizes traffic routing within datacenters, prioritizing local instances for lower latency and reduced costs
- Sameness groups (GA): Simplifies multi-cluster operations, enhancing service reliability for enterprises
- HCP Consul Central: Introduces observability features for HashiCorp-managed and linked self-managed clusters, enhancing cluster health monitoring. Additionally, a global API simplifies integration with HCP Consul Central, allowing platform operators to streamline workflows and access cluster details

View the full article
-
A GitOps tool like Argo CD can help centralize the automation, installation, and configuration of services onto multiple Kubernetes clusters. Rather than apply changes using a Kubernetes CLI or CI/CD, a GitOps workflow detects changes in version control and applies the changes automatically in the cluster. You can use a GitOps workflow to deploy and manage changes to a Consul cluster, while orchestrating the configuration of Consul service mesh for peering, network policy, and gateways. This approach to managing your Consul cluster and configuration has two benefits. First, a GitOps tool handles the order-of-operations and automation of cluster updates before configuration updates. Second, your Consul configuration uses version control as a source of truth that GitOps enforces across multiple Kubernetes clusters. This post demonstrates a GitOps workflow for deploying a Consul cluster, configuring its service mesh, and upgrading its server with Argo CD. Argo CD annotations for sync waves and resource hooks enable orchestration of Consul cluster deployment followed by service mesh configuration with Custom Resource Definitions (CRDs). Updating a Consul cluster on Kubernetes involves opening a pull request with changes to Helm chart values or CRDs and merging it. Argo CD synchronizes the configuration to match version control and handles the order of operations when applying the changes... View the full article
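The ordering described in the post above is driven by plain Argo CD annotations on the manifests, so the Consul Helm release can be placed in an earlier sync wave than the mesh configuration CRDs. The snippet below is a hedged sketch; the wave number and resource name are illustrative assumptions, not values taken from the post.

# Sketch: apply this Consul CRD in a later sync wave than the Consul Helm
# release, so the servers and CRDs exist before configuration entries are written.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api                              # illustrative service name
  annotations:
    argocd.argoproj.io/sync-wave: "5"    # higher waves sync after lower waves
spec:
  protocol: http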
-
Tagged with: kubernetes, hashicorp (and 4 more)
-
HashiCorp Waypoint is an application release orchestrator that enables developers to deploy, manage, and observe their applications on any infrastructure platform, including HashiCorp Nomad, Kubernetes, or Amazon Elastic Container Service (Amazon ECS), with a few short commands. At HashiConf Global last month, many developers discussed their use of multiple services from HashiCorp and were interested in connecting Waypoint to the rest of the HashiCorp stack. In listening to this feedback, we wanted to highlight how to use Waypoint plugins for HashiCorp Terraform Cloud, Consul, and Vault to automate and simplify the application release process. By connecting to our other products (as well as many third-party tools), Waypoint optimizes an engineer’s workflow to enable teams to work faster. The plugins are available for HCP Waypoint (beta) and Waypoint open source.

»Waypoint Plugins Help Save Time

Typically, the infrastructure team needs to explicitly communicate configurations via GitHub (or some other method) to the application team as a part of the release process, which they copy into their code. With the respective plugins, application developers can use a fixed piece of configuration to grab specific parameters set by the infrastructure team. This removes the back-and-forth between teams that may be a point of failure or miscommunication during the CI/CD process. For example, infrastructure engineers can create an Amazon ECS cluster using Terraform Cloud, and app teams can deploy applications into that cluster without needing to copy-paste cluster names.

For a closer look at how to pull information into Waypoint, check out these code examples:

- Waypoint’s Terraform Cloud Config Sourcer Variable on GitHub
- Waypoint Node.js example on GitHub
- HashiCorp plugins: Terraform Cloud, Consul, Vault

»Automate Your Application Delivery Workflow with Waypoint

Modern organizations often deploy applications to multiple cloud providers, an approach that dramatically increases the complexity of releases. Multi-cloud or multi-platform environments force application developers to become familiar with those multiple platforms and the frequent, unexpected changes to them. When managed the traditional way via script in a continuous integration process, the pipeline is brittle. Application developers find themselves needing to rely heavily on infrastructure teams for routine tasks like checking application health, deploying a specific version, or getting logs.

»Try HCP Waypoint with Your Team

The goal of Waypoint is to remove this dependency by automating how application developers build, deploy, and release software to a wide variety of platforms. The Waypoint plugins for Terraform, Vault, and Consul further this aim of automation by pulling in configuration details without relying so heavily on the infrastructure team. No other application release platform offers these deep connections to the HashiCorp ecosystem tools and helps teams work faster and smarter. Just as important, Waypoint is a highly extensible platform that allows users to build their own plugins or use other plugins created by HashiCorp and our community. Over time we anticipate the number of Waypoint plugins will continue to grow. Try HCP Waypoint for free to get started.

View the full article
-
Tagged with: terraform, hashicorp terraform (and 4 more)
-
We are pleased to announce that HashiCorp Consul on Amazon Elastic Container Service (ECS) 0.5 is now generally available. This release adds support for authenticating services and clients using AWS Identity and Access Management (IAM) identities. The new release also adds support for mesh gateways, which enable services to communicate across multiple runtimes and clouds and reduces risk for organizations by enforcing consistent end-to-end security for service communication. View the full article
-
Today at HashiConf Europe, we introduced a number of significant enhancements for HashiCorp Consul, our service networking solution that helps users discover and securely connect any application. These updates include HashiCorp Cloud Platform (HCP) Consul becoming generally available on Microsoft Azure, general availability of Consul API Gateway version 0.3, and tech previews of the upcoming Consul 1.13 and AWS Lambda support, scheduled for release later this year. Here’s a closer look at all three announcements… View the full article
-
We are excited to announce the public beta of HashiCorp Consul service mesh support for Amazon’s serverless functions service: AWS Lambda. This release will ensure service mesh users can now take advantage of consistent workflows and encrypted communications from all mesh services to all upstream workloads including Lambda functions. As organizations focus on getting to market faster, serverless adoption helps developers accelerate application development. Datadog’s State of Serverless Survey shows that AWS Lambda is leading the serverless landscape. However, effectively integrating AWS Lambda into a service mesh requires first-class support. Previously, other Lambda integrations bypassed the service mesh. This beta release addresses these limitations by extending Consul service mesh capabilities and secure communications to AWS Lambda in addition to existing support for Kubernetes, virtual machines, HashiCorp Nomad, and Amazon ECS... View the full article
-
Tagged with: service mesh, lambda (and 2 more)
-
We are pleased to announce the general availability of Consul-Terraform-Sync (CTS) 0.6. This release marks another step in the maturity of our larger Network Infrastructure Automation (NIA) solution. CTS combines the functionality of HashiCorp Terraform and HashiCorp Consul to eliminate manual ticket-based systems across on-premises and cloud environments. Its capabilities can be broken down into two parts: For Day 0 and Day 1, teams use Terraform to quickly deploy network devices and infrastructure in a consistent and reproducible manner. Once established, teams manage Day 2 networking tasks by integrating Consul’s catalog to register services into the system via CTS. Whenever a change is recorded to the service catalog, CTS triggers a Terraform run that uses partner ecosystem integrations to automate updates and deployments for load balancers, firewall policies, and other service-defined networking components. This post covers the evolution of CTS and highlights the new features in CTS 0.6… View the full article
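To make the Day 2 workflow described above concrete, a CTS instance is driven by a small HCL configuration that watches Consul services and runs a Terraform module when they change. The snippet below is a hedged sketch: the module source and service names are placeholders, and the field names should be checked against the CTS 0.6 documentation before use.

# Sketch of a Consul-Terraform-Sync configuration (cts-config.hcl).
consul {
  address = "localhost:8500"   # Consul agent that CTS watches for catalog changes
}

driver "terraform" {
  # CTS runs Terraform locally by default; a Terraform Cloud driver also exists.
}

task {
  name        = "update-firewall-policies"            # hypothetical task name
  description = "Update firewall rules when web/api instances change"
  source      = "example-org/firewall-policies/fake"  # placeholder module source
  services    = ["web", "api"]                        # services whose changes trigger a run
}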
-
Tagged with: cts, terraform cloud (and 3 more)
-
Since the launch of our HashiCorp Cloud Engineering Certifications in April, thousands of people have passed the Terraform Associate and Vault Associate exams. Starting today, you can become certified with our Consul Associate exam as a way to demonstrate proficiency in Network Automation. Study for and purchase the exam today... View the full article
-
We are pleased to announce that our first HashiCorp Cloud Platform (HCP) service — HCP Consul — is now in public beta. HCP Consul enables a team to provision HashiCorp-managed Consul clusters directly through the HCP portal and easily leverage Consul’s multi-platform service mesh capabilities within their Amazon EKS, ECS, and EC2 application environments. To learn more about HashiCorp Cloud Platform, please visit our web page. If you are new to HashiCorp Consul, please visit the Consul Learn documentation for an introduction. View the full article
-
We are pleased to announce the public beta availability of HashiCorp Consul 1.9. Consul is a multi-cloud service networking platform to discover, connect, and secure services across any runtime platform and public or private cloud... View the full article
-
Recently we announced that Nomad now supports running Consul Connect ingress gateways. For the past year, Nomad has been incrementally improving its first-class integration with Consul’s service mesh. Whether through the use of sidecar proxies like Envoy or by embedding the Connect native client library, Nomad supports running tasks that can communicate with other components of a Consul service mesh quickly and securely. Now with support for Consul ingress gateways, Nomad users are able to provide access to Connect-enabled services from outside the service mesh.

Overview

Ingress gateways enable ingress traffic from services running outside of the Consul service mesh to services inside the mesh. An ingress gateway is a special type of proxy that is registered into Consul as a service with its kind set to ingress-gateway. Ingress gateways provide a dedicated entry point for outside traffic and apply the proper traffic management policies for how requests to mesh services are handled. With this latest release, Nomad can now be used to not only deploy ingress gateway proxy tasks, but configure them too. Using this feature, Nomad job authors can enable applications external to the Consul service mesh to access those Connect-enabled services. The ingress gateway configuration enables defining one or more listeners that map to a set of backing services.

Service Configuration

There is a new gateway parameter available for services with a Connect stanza defined at the group level in a Nomad job specification. Inside this stanza are parameters for configuring the underlying Envoy proxy as well as the configuration entry that is used to establish the gateway configuration in Consul.

service {
  gateway {
    proxy {
      // envoy proxy configuration
      // https://www.nomadproject.io/docs/job-specification/gateway#proxy-parameters
    }
    ingress {
      // consul configuration entry
      // https://www.nomadproject.io/docs/job-specification/gateway#ingress-parameters
    }
  }
}

The proxy stanza is used to define configuration regarding the underlying Envoy proxy that Nomad will run as a task in the same task group as the service definition. This configuration becomes part of the service registration for the service registered on behalf of the ingress gateway. The ingress stanza represents the ingress-gateway configuration entry that Consul uses to manage the proxy's listeners. A listener declares the port, protocol (tcp or http), and each Consul service which may respond to incoming requests. When listening on http, a service may be configured with a list of hosts that specify which requests will match the service.

ingress {
  listener {
    port     = 8080
    protocol = "tcp"
    service {
      name = "uuid-api"
    }
  }
}

If the task group containing the ingress gateway definition is configured for bridge networking, Nomad will automatically reconfigure the proxy options to work from inside the group's network namespace for the defined listeners, e.g.:

envoy_gateway_no_default_bind = true
envoy_gateway_bind_addresses "uuid-api" {
  address = "0.0.0.0"
  port    = 8080
}

Task

Nomad and Consul leverage Envoy as the underlying proxy implementation for ingress gateways. The Nomad task group that defines the ingress service does not require any tasks to be defined — Nomad will derive the task from the service configuration and inject the task into the task group automatically during job creation.

Discover

To enable easier service discovery, Consul provides a new DNS subdomain for each service fronted by an ingress gateway.
To find ingress-enabled services, use:

<service>.ingress.<domain>

By default, <domain> is simply consul. To test that an ingress gateway is working, the dig command can be used to look up the DNS entry of a service from Consul directly, e.g.:

dig @127.0.0.1 -p 8600 <service>.ingress.consul SRV

Example

The following job specification demonstrates using an ingress gateway as a method of plain HTTP ingress for our UUID generator API Connect-native sample service. This open source example is designed to be runnable if Consul, Nomad, and Docker are already configured.

job "ig-bridge-demo" {
  datacenters = ["dc1"]

  # This group will have a task providing the ingress gateway automatically
  # created by Nomad. The ingress gateway is based on the Envoy proxy being
  # managed by the docker driver.
  group "ingress-group" {

    network {
      mode = "bridge"

      # This example will enable tcp traffic to access the uuid-api connect
      # native example service by going through the ingress gateway on port 8080.
      # The communication between the ingress gateway and the upstream service occurs
      # through the mTLS protected service mesh.
      port "api" {
        static = 8080
        to     = 8080
      }
    }

    service {
      name = "my-ingress-service"
      port = "8080"

      connect {
        gateway {
          # Consul gateway [envoy] proxy options.
          proxy {
            # The following options are automatically set by Nomad if not
            # explicitly configured when using bridge networking.
            #
            # envoy_gateway_no_default_bind = true
            # envoy_gateway_bind_addresses "uuid-api" {
            #   address = "0.0.0.0"
            #   port = <associated listener.port>
            # }
            #
            # Additional options are documented at
            # https://www.nomadproject.io/docs/job-specification/gateway#proxy-parameters
          }

          # Consul Ingress Gateway Configuration Entry.
          ingress {
            # Nomad will automatically manage the Configuration Entry in Consul
            # given the parameters in the ingress block.
            #
            # Additional options are documented at
            # https://www.nomadproject.io/docs/job-specification/gateway#ingress-parameters
            listener {
              port     = 8080
              protocol = "tcp"
              service {
                name = "uuid-api"
              }
            }
          }
        }
      }
    }
  }

  # The UUID generator from the Connect-native demo is used as an example service.
  # The ingress gateway above makes access to the service possible over tcp port 8080.
  group "generator" {
    network {
      mode = "host"
      port "api" {}
    }

    service {
      name = "uuid-api"
      port = "${NOMAD_PORT_api}"

      connect {
        native = true
      }
    }

    task "generate" {
      driver = "docker"

      config {
        image        = "hashicorpnomad/uuid-api:v3"
        network_mode = "host"
      }

      env {
        BIND = "0.0.0.0"
        PORT = "${NOMAD_PORT_api}"
      }
    }
  }
}

You can run this example by saving it as ingress-gateway.nomad and running the commands:

consul agent -dev
sudo nomad agent -dev-connect
nomad job run ingress-gateway.nomad

Once running, the ingress gateway will be available on port 8080 of the node that the ingress gateway service is running on. The UUID generator service will be listening to a dynamically allocated port chosen by Nomad. Because the UUID generator service is in the Connect service mesh, it will not be possible to connect to it directly, as it will reject any connection without a valid mTLS certificate. In most environments Consul DNS will be configured so that applications can easily discover Consul services. We can use curl and dig to simulate what an application accessing a service through the ingress gateway would look like.

$ curl $(dig +short @127.0.0.1 -p 8600 uuid-api.ingress.dc1.consul. ANY):8080
c8bfae29-3683-4b19-89dd-fbfbe691a6e7

Limitations

At the moment, Envoy is the only proxy implementation that can be used by Nomad and Consul as an ingress gateway.
When being used as an ingress gateway, Nomad will launch Envoy using the docker task driver, as there is not yet support for manually specifying the proxy task.

Ingress gateways are configured using Consul configuration entries, which are global in scope across federated Consul clusters. When multiple Nomad regions define an ingress gateway under a particular service name, each region will rewrite the ingress-gateway Configuration Entry in Consul for that service. In practice, typical individual ingress gateway service definitions would be the same across Nomad regions, causing the extra writes to turn into no-ops.

When running the ingress gateway in host-networking mode, the Envoy proxy creates a default administration HTTP listener that is bound to localhost. There is no way to disable or secure the Envoy administration listener (envoy/2763). Any other process able to connect to localhost on the host machine will be able to access the Envoy configuration through its administration listener, including Service Identity Tokens for the proxy when Consul ACLs are in use.

Conclusion

In this blog post, we shared an overview of Nomad's Consul ingress gateway integration and how it can be used to configure and run Consul ingress gateways on Nomad. Using this integration, job specification authors can easily create endpoints for external applications to make requests to Connect-enabled services that would otherwise be accessible only through the Consul service mesh. For more information about ingress gateways, see the Consul ingress gateway documentation. For more information about Nomad, please visit our docs.

View the full article
-
Today we’re pleased to announce the release of a suite of Terraform modules in the public registry that provide an implementation of the Reference Architecture for Consul, Nomad, and Vault. You can use these modules in AWS Cloud. They represent a straightforward way to stand up a working product cluster. If you navigate to the HashiCorp products section of the Terraform registry and scroll down, you'll see the "Modules Maintained By HashiCorp" section shown above.

What Modules Are Available?

This initial release contains modules for the open-source versions of Consul, Nomad, and Vault for AWS. This combination of products and platform was chosen in light of the fact that AWS is the cloud of choice across much of the industry, and these three products have broad practitioner support and adoption.

What Are These Modules?

These modules are opinionated implementations of the product reference architectures for Vault, Consul, and Nomad. You can drop them into existing Terraform set-ups or use them to compose entirely new infrastructure in Terraform. Each module is composed in such a way that you can get started quickly by supplying a few values as variables. Other than these values, which are specific to your environment, the module contains defaults that bring up the appropriate infrastructure for each product in accordance with the recommendations of the HashiCorp Enterprise Architecture group. For the inexperienced practitioner this means that the start time is greatly accelerated; spinning up the infrastructure allows you to get started with the product rather than having to learn and understand the details of configuration. This straightforward approach is also intended to help you experiment by making it simple to bring up a functional version of the product for demonstration purposes, or for internal benchmarking by an organization looking to make sure introducing HashiCorp products is not also introducing overhead. While full integration of a product into an existing infrastructure might require more specific configuration, these modules allow the swift set-up of test or development environments that adhere to HashiCorp best practices in the operation of our products.

What About Flexibility?

The HashiCorp way has always been to provide you with as much flexibility as possible; we make tools, not prescriptions. These modules don’t change that; rather, they’re a new way of expressing it. If you are a more experienced practitioner who is looking for flexibility and the ability to control configuration in a more manual fashion, we still offer our previous modules for Consul, Nomad, and Vault. These new modules join our previous offerings in the registry, and are intended to function as quickstarts for new practitioners and as reference material accompanying our HashiCorp Learn site.

What's Next?

We believe that remixing and collaboration make us better, and that’s why we’ve invested in these open source modules. As the maintainers, we are sharing these in the hope that you will find them helpful, whether as implementation tools, references, or templates. We also invite feedback and contribution, both on the modules already released and our future work in this regard. We especially hope new practitioners will find these modules helpful, and we’ll be working toward improving any rough edges in new-practitioner experience in these and the future modules we will release.

View the full article
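Using one of the modules described above from an existing Terraform configuration looks like any other registry module reference: supply a few environment-specific inputs and let the module's defaults stand up the reference architecture. The block below is only a sketch; the module source, version, and input names are placeholders, so check the registry listing for the actual module addresses and required variables.

# Hypothetical usage of a HashiCorp-maintained reference-architecture module.
module "vault_cluster" {
  source  = "hashicorp/example-product/aws"   # placeholder source; look up the real module in the registry
  version = "~> 0.1"                          # placeholder version constraint

  # A few environment-specific inputs; everything else falls back to the
  # module's reference-architecture defaults.
  vpc_id     = "vpc-0123456789abcdef0"        # placeholder VPC ID
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]  # placeholder subnets
}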
-
Forum Statistics
67.4k Total Topics
65.3k Total Posts