Search the Community
Showing results for tags 'dns'.
-
DNS Attacks: Advanced Security
In an increasingly interconnected digital world, the threat of DNS attacks looms large, posing a formidable challenge to the security of our digital assets. From disrupting online services to redirecting users to malicious websites, DNS attacks have become a prevalent tool in the arsenal of cybercriminals. In this age where every […] View the full article
-
Authors/Presenters: *Elsa Rodríguez, Radu Anghel, Simon Parkin, Michel van Eeten, and Carlos Gañán* Many thanks to USENIX for publishing their outstanding USENIX Security ’23 presenters’ content and for the organization’s strong commitment to Open Access. The content originates from the conference’s events at the Anaheim Marriott and is available via the organization’s YouTube channel. The post USENIX Security ’23 – Two Sides Of The Shield: Understanding Protective DNS Adoption Factors appeared first on Security Boulevard. View the full article
-
Authors/Presenters: *Alexandra Nisenoff, Ranya Sharma and Nick Feamster* Many thanks to USENIX for publishing their outstanding USENIX Security ’23 presenters’ content and for the organization’s strong commitment to Open Access. The content originates from the conference’s events at the Anaheim Marriott and is available via the organization’s YouTube channel. The post USENIX Security ’23 – User Awareness and Behaviors Concerning Encrypted DNS Settings in Web Browsers appeared first on Security Boulevard. View the full article
-
Google Kubernetes Engine (GKE) offers two different ways to perform service discovery and DNS resolution: the in-cluster kube-dns functionality, and GCP-managed Cloud DNS. Either approach can be combined with the performance-enhancing NodeLocal DNSCache add-on. New GKE Autopilot clusters use Cloud DNS as a fully managed DNS solution without any configuration required on your part. For GKE Standard clusters, however, you have the following DNS provider choices:
- kube-dns (the default)
- Cloud DNS, configured for either cluster scope or VPC scope
- Your own self-installed DNS (such as CoreDNS)
In this blog, we break down the differences between the DNS providers for your GKE Standard clusters and guide you to the best solution for your specific situation.
Kube-DNS
kube-dns is the default DNS provider for GKE Standard clusters, providing DNS resolution for services and pods within the cluster. If you select this option, GKE deploys the necessary kube-dns components, such as the kube-dns pods, kube-dns-autoscaler, kube-dns ConfigMap, and kube-dns Service, in the kube-system namespace. It is also the only DNS provider for Autopilot clusters running versions earlier than 1.25.9-gke.400 and 1.26.4-gke.500. kube-dns is a suitable solution for workloads with moderate DNS query volumes that have stringent DNS resolution latency requirements (e.g., under ~2-4 ms). kube-dns can provide low-latency resolution for all DNS queries because all resolutions are performed within the cluster. If you notice DNS timeouts or failed DNS resolutions for bursty workload traffic patterns when using kube-dns, consider scaling the number of kube-dns pods and enabling NodeLocal DNSCache for the cluster. You can scale the number of kube-dns pods ahead of time using the kube-dns autoscaler, manually tuning it to the cluster's DNS traffic patterns. Using kube-dns along with NodeLocal DNSCache (discussed below) also reduces the overhead on the kube-dns pods for DNS resolution of external services. While scaling up kube-dns and using NodeLocal DNSCache (NLD) helps in the short term, it does not guarantee reliable DNS resolution during sudden traffic spikes. Migrating to Cloud DNS therefore provides a more robust, long-term solution for reliable DNS resolution across varying DNS query volumes. You can update the DNS provider for your existing GKE Standard cluster from kube-dns to Cloud DNS without having to re-create the cluster. For logging DNS queries when using kube-dns, manual effort is required: you must create a kube-dns debug pod with log-queries enabled.
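To make the tuning and migration steps above concrete, here is a minimal shell sketch. It assumes the standard GKE kube-dns-autoscaler ConfigMap layout, and the cluster name, zone, and autoscaler values are placeholders; treat it as a starting point rather than a definitive procedure.
$ kubectl get configmap kube-dns-autoscaler -n kube-system -o yaml   # inspect the current autoscaler settings
# Pre-scale kube-dns for bursty traffic by lowering nodesPerReplica/coresPerReplica and raising min:
$ kubectl patch configmap kube-dns-autoscaler -n kube-system --type merge \
    -p '{"data":{"linear":"{\"coresPerReplica\":128,\"nodesPerReplica\":8,\"min\":3,\"preventSinglePointFailure\":true}"}}'
# Enable the NodeLocal DNSCache add-on (nodes are re-created, so do this in a maintenance window):
$ gcloud container clusters update my-cluster --zone us-central1-c --update-addons=NodeLocalDNS=ENABLED
# Migrate the cluster's DNS provider from kube-dns to cluster-scope Cloud DNS:
$ gcloud container clusters update my-cluster --zone us-central1-c --cluster-dns=clouddns --cluster-dns-scope=cluster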
Cloud DNS
Cloud DNS is a Google-managed service that is designed for high scalability and availability. It elastically scales to adapt to your DNS query volume, providing consistent and reliable DNS query resolution regardless of traffic volume. Cloud DNS simplifies your operations and minimizes operational overhead: since it is a Google-managed service, it does not require you to maintain any additional infrastructure. Cloud DNS supports DNS resolution across the entire VPC, which is not currently possible with kube-dns. Also, when using Multi Cluster Services (MCS) in GKE, Cloud DNS provides DNS resolution for services across your fleet of clusters.
Unlike kube-dns, Google Cloud's hosted DNS service provides Pod and Service DNS resolution that auto-scales and offers a 100% service-level agreement, reducing DNS timeouts and providing consistent DNS resolution latency for heavy DNS workloads. Cloud DNS also integrates with Cloud Monitoring, giving you greater visibility into DNS queries for enhanced troubleshooting and analysis. The Cloud DNS controller automatically provisions DNS records in Cloud DNS for pods and for ClusterIP, headless, and ExternalName Services. You can configure Cloud DNS to provide GKE DNS resolution in either VPC or cluster (the default) scope. With VPC scope, the DNS records are resolvable within the entire VPC; this is achieved with a private DNS zone that is created automatically. With cluster scope, the DNS records are resolvable only within the cluster. While Cloud DNS offers enhanced features, it does come with usage-based costs, but you save on compute costs and overhead by removing the kube-dns pods. Considering typical cluster sizes and workload traffic patterns, Cloud DNS is usually more cost-effective than running kube-dns. You can migrate clusters from kube-dns to Cloud DNS cluster scope without downtime or changes to your applications; the reverse (migrating from Cloud DNS to kube-dns) is not a seamless operation.
NodeLocal DNSCache
NodeLocal DNSCache is a GKE add-on that you can run in addition to kube-dns or Cloud DNS. The node-local-dns pods are deployed on the GKE nodes after the option has been enabled (subject to a node upgrade procedure). NodeLocal DNSCache (NLD) helps reduce average DNS resolution times by resolving DNS requests locally on the same nodes as the pods, forwarding only the requests that it cannot resolve to the other DNS servers in the cluster. This is a great fit for clusters that have heavy internal DNS query loads. Enable NLD during a maintenance window; note that node pools must be re-created for this change to take effect.
Final thoughts
The choice of DNS provider for your GKE Standard cluster has implications for performance and reliability, in addition to your operations and overall service discovery architecture. It is therefore crucial for GKE Standard users to understand their DNS options, taking into account their application and architecture objectives. Standard GKE clusters allow you to use either kube-dns or Cloud DNS as your DNS provider, letting you optimize for either low-latency DNS resolution or a simple, scalable, and reliable DNS solution. You can learn more about DNS for your GKE cluster from the GKE documentation. If you have any further questions, feel free to contact us. We thank the Google Cloud team member who contributed to the blog: Selin Goksu, Technical Solutions Developer, Google. View the full article
-
DiskStation Manager 7 (DSM 7) is the operating system of Synology NAS devices. You can configure Let's Encrypt SSL certificates for your Synology NAS from the DSM 7 web interface. By default, Synology DSM 7 uses the HTTP-01 challenge to verify the ownership of the domain (that you want to use for your Synology NAS) and issue an SSL certificate for it. But the HTTP-01 challenge won't work unless you have a public IP address and your NAS is accessible from the internet. So, if you want to use Let's Encrypt SSL certificates on your home or private network, you have to use the DNS-01 challenge instead. With the DNS-01 challenge, Let's Encrypt verifies the ownership of the domain using the DNS server of the domain, so it works for private networks as well. Sadly, the Synology DSM 7 web interface does not provide any way of obtaining Let's Encrypt SSL certificates using the DNS-01 challenge. Luckily, the "acme.sh" program can be installed on your Synology NAS and used to generate and renew Let's Encrypt SSL certificates using the DNS-01 challenge.
In this article, we will show you the following:
- How to install "acme.sh" on your Synology NAS
- How to use "acme.sh" to generate a Let's Encrypt SSL certificate (via the DNS-01 challenge) for the domain name that you're using on your Synology NAS
- How to install the "acme.sh"-generated Let's Encrypt SSL certificate on your Synology NAS
- How to configure the DSM 7 operating system of your Synology NAS to use the generated Let's Encrypt SSL certificate
- How to configure your Synology NAS to automatically renew the generated Let's Encrypt SSL certificates using "acme.sh"
NOTE: In this article, we will use the CloudFlare DNS server for demonstration. You can use other DNS services that are supported by acme.sh as well; all you have to do is make the necessary adjustments.
Table of Contents:
Creating a Certadmin User on Synology NAS
Configuring the CloudFlare DNS Server for the Let's Encrypt DNS-01 Challenge
Configuring Other DNS Services for the Let's Encrypt DNS-01 Challenge
Accessing the Synology NAS Terminal via SSH
Downloading Acme.sh on Your Synology NAS
Installing Acme.sh on Your Synology NAS
Generating a Let's Encrypt SSL Certificate Using Acme.sh for Your Synology NAS
Installing the Let's Encrypt SSL Certificate on Your Synology NAS Using Acme.sh
Setting the Let's Encrypt SSL Certificate as Default on Your Synology NAS
Configuring Synology NAS to Auto-Renew the Let's Encrypt SSL Certificate Using Acme.sh
Conclusion
References
Creating a Certadmin User on Synology NAS
First, you should create a new admin user on your Synology NAS to generate and renew the Let's Encrypt SSL certificates. To create a new admin user, click on Control Panel[1] > User & Group[2] from the DSM 7 web interface. Click on "Create" from the "User" tab. Type in "certadmin" as the user name[1], an optional short description for the user[2], and the user login password[3], then click on "Next"[4]. To make it an admin user, tick the "administrators" group in the list[1] and click on "Next"[2]. Click on "Next" through the remaining steps of the wizard and finally click on "Done". The certadmin user should now be created on your Synology NAS.
Configuring the CloudFlare DNS Server for the Let's Encrypt DNS-01 Challenge
To use the CloudFlare DNS server for the Let's Encrypt DNS-01 challenge, you need to generate a CloudFlare DNS token. You can generate a CloudFlare DNS server token from the CloudFlare dashboard. For more information, read this article.
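Before handing the token to "acme.sh", you can optionally sanity-check it against CloudFlare's token verification endpoint. This is a minimal sketch; the token placeholder is yours to fill in, and the check is not required by acme.sh.
$ curl -s -H "Authorization: Bearer <CloudFlare DNS API Token>" \
    https://api.cloudflare.com/client/v4/user/tokens/verify
# A response containing "success": true and "status": "active" means the token is usable.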
Configuring Other DNS Services for the Let's Encrypt DNS-01 Challenge
"Acme.sh" supports other DNS services as well. If you don't want to use CloudFlare DNS, you can use any of the "acme.sh"-supported DNS services. The configuration differs slightly between DNS services; for more information, check the "acme.sh" DNS API guide.
Accessing the Synology NAS Terminal via SSH
To install "acme.sh" and to generate and install a Let's Encrypt SSL certificate on your Synology NAS, you need to access the terminal of your Synology NAS. For more information on enabling SSH access on your Synology NAS and accessing its terminal, read this article. Once you have enabled SSH access on your Synology NAS, open a terminal app on your computer and run the following command:
$ ssh certadmin@<ip-domain-of-your-synology-nas>
You will be asked for the login password of the certadmin user. Type in the login password of the certadmin user of your Synology NAS and press <Enter>. You should be logged in to your Synology NAS as the certadmin user.
Downloading Acme.sh on Your Synology NAS
To download the latest version of the "acme.sh" client, run the following command:
$ wget -O /tmp/acme.sh.zip https://github.com/acmesh-official/acme.sh/archive/master.zip
The latest version of the "acme.sh" client archive, "acme.sh.zip", should be downloaded to the "/tmp" directory of your Synology NAS.
Installing Acme.sh on Your Synology NAS
To extract the "/tmp/acme.sh.zip" archive into the "/usr/local/share" directory of your Synology NAS, run the following command, then type in the login password of the certadmin user and press <Enter> when prompted (note that 7z expects the output directory to follow "-o" without a space):
$ sudo 7z x -o/usr/local/share /tmp/acme.sh.zip
The "/tmp/acme.sh.zip" archive should be extracted into the "/usr/local/share/acme.sh-master" directory. For simplicity, rename the "acme.sh-master" directory to just "acme.sh" with the following command:
$ sudo mv -v /usr/local/share/acme.sh-master /usr/local/share/acme.sh
To make the certadmin user the owner of the "/usr/local/share/acme.sh" directory and its contents, run the following command:
$ sudo chown -Rfv certadmin /usr/local/share/acme.sh
Generating a Let's Encrypt SSL Certificate Using Acme.sh for Your Synology NAS
To generate a Let's Encrypt SSL certificate for the domain name that you're using on your Synology NAS, navigate to the "/usr/local/share/acme.sh" directory as follows:
$ cd /usr/local/share/acme.sh
Now, you need to export the required DNS API token environment variables. We use CloudFlare DNS to manage the domain name that we are using on our Synology NAS, so all we have to do is export the CF_Token environment variable with the value of the CloudFlare DNS API token. If you're using some other DNS service, check the "acme.sh" DNS API documentation for the variables that you need to export for "acme.sh" to work with your DNS service.
$ export CF_Token="<CloudFlare DNS API Token>"
Also, export the required Synology environment variables so that "acme.sh" can install the generated SSL certificates on your Synology NAS.
$ export SYNO_Username="certadmin"
$ export SYNO_Password="Your_certadmin_login_Password"
$ export SYNO_Certificate="Let's Encrypt"
$ export SYNO_Create=1
To generate a Let's Encrypt SSL certificate for the "*.nodekite.com" (wildcard) domain name using the CloudFlare DNS plugin (--dns dns_cf), run the following command:
$ ./acme.sh --server letsencrypt --issue --dns dns_cf --home $PWD -d "*.nodekite.com"
NOTE: If you're using another DNS service, change the DNS plugin (--dns <dns-plugin-name>) in the previous command accordingly. For more information, check the "acme.sh" DNS API documentation.
A Let's Encrypt SSL certificate is now being generated; it takes a while to complete. At this point, the Let's Encrypt SSL certificate should be generated.
Installing the Let's Encrypt SSL Certificate on Your Synology NAS Using Acme.sh
Once the Let's Encrypt SSL certificate is generated for the domain name (*.nodekite.com in this case) of your Synology NAS, you can install it on your Synology NAS with the following command:
$ ./acme.sh -d "*.nodekite.com" --deploy --deploy-hook synology_dsm --home $PWD
If you have two-factor authentication enabled for the certadmin user, you will receive an OTP code; type in the OTP code and press <Enter>. If you don't have two-factor authentication enabled for the certadmin user, leave it empty and press <Enter>. Press <Enter>. The generated Let's Encrypt SSL certificate should be installed on your Synology NAS. Once installed, it will be displayed in the Control Panel > Security > Certificate section of the DSM 7 web interface of your Synology NAS.
Setting the Let's Encrypt SSL Certificate as Default on Your Synology NAS
To manage the SSL certificates of your Synology NAS, navigate to the Control Panel > Security > Certificate section of the DSM 7 web interface. To set the newly installed Let's Encrypt SSL certificate as the default, so that newly installed web services on your Synology NAS will use it by default, select the Let's Encrypt SSL certificate and click on Action > Edit. Tick "Set as default certificate"[1] and click on "OK"[2]. The Let's Encrypt SSL certificate should now be set as the default certificate for your Synology NAS. To configure the existing web services of your Synology NAS to use the Let's Encrypt SSL certificate, click on "Settings". As you can see, all the web services are using the Synology self-signed SSL certificate. To change the SSL certificate for a web service, click on the respective drop-down menu on the right and select the Let's Encrypt SSL certificate that you want to use for that web service. In the same way, select the Let's Encrypt SSL certificate for all the installed web services of your Synology NAS and click on "OK". Click on "Yes". The changes are applied; it takes a few seconds to complete. Once the Let's Encrypt SSL certificate is applied to all the web services of your Synology NAS, refresh the web page and your DSM 7 web interface should use the Let's Encrypt SSL certificate.
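To double-check that DSM is actually serving the new certificate, you can inspect it from any machine on your network. This is a hedged sketch: the hostname is a placeholder for your own NAS name, and 5001 is assumed to be the default DSM HTTPS port.
$ openssl s_client -connect nas.nodekite.com:5001 -servername nas.nodekite.com </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -dates
# The issuer should now be a Let's Encrypt intermediate rather than Synology's self-signed CA.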
Configuring Synology NAS to Auto-Renew the Let's Encrypt SSL Certificate Using Acme.sh
To configure your Synology NAS to automatically renew the Let's Encrypt SSL certificate, navigate to Control Panel > Task Scheduler from the DSM 7 web interface. From the Task Scheduler, click on Create > Scheduled Task > User-defined script. From the "General" tab, type in "Renew SSL Certs" in the "Task" section[1] and select "certadmin" from the "User" dropdown menu[2]. From the "Schedule" tab, select "Run on the following date"[1] and select "Repeat monthly" from the "Repeat" dropdown menu[2]. Navigate to the "Task Settings" tab, type in the following command in the "User-defined script" section[1], and click on "OK"[2]:
/usr/local/share/acme.sh/acme.sh --renew --server letsencrypt -d "*.nodekite.com" --home /usr/local/share/acme.sh
A new task should be created. The "Renew SSL Certs" task will run every month and make sure that the Let's Encrypt SSL certificate is renewed before it expires.
Conclusion
In this article, we showed you how to install and use the "acme.sh" ACME client to generate a Let's Encrypt SSL certificate via the DNS-01 challenge on your Synology NAS. We also showed you how to install the generated Let's Encrypt SSL certificate on your Synology NAS and configure the web services of your Synology NAS to use it. Finally, we showed you how to configure a scheduled task on your Synology NAS to automatically renew the Let's Encrypt SSL certificate before it expires.
References:
Synology DSM 7 with Let's Encrypt and DNS Challenge
Automatically renew Let's Encrypt certificates on Synology NAS using DNS-01 challenge
acmesh-official/acme.sh: A pure Unix shell script implementing the ACME client protocol
View the full article
-
Tagged with: ssl, letsencrypt (and 8 more)
-
Google Cloud DNS is a scalable, reliable, and managed authoritative Domain Name System (DNS) service running on the same infrastructure as Google. It provides a way for you to manage your DNS records using Google's infrastructure for production-quality, high-volume DNS services. Google Cloud DNS is programmable, allowing you to easily publish and manage millions of DNS zones and records using Google's API. Here's a detailed guide on how to get started with Google Cloud DNS and perform common DNS management tasks.
Setting Up Google Cloud DNS
Create a project in the Google Cloud Console: If you don't already have a Google Cloud account, you'll need to sign up and create a project in the Google Cloud Console.
Enable the Cloud DNS API for your project: Navigate to the "APIs & Services" dashboard and click on "Enable APIs and Services". Search for "Cloud DNS" and enable it for your project.
Set up billing: Google Cloud DNS requires an active billing account. Make sure you've set this up before you proceed.
Creating a DNS Zone
Navigate to the Cloud DNS section: In the Google Cloud Console, go to the "Network services" section and select "Cloud DNS".
Create a DNS zone: Click on "Create zone". You will be prompted to enter details for the new DNS zone:
Zone type: Choose "Public" for zones accessible from the internet, or "Private" for zones that can be resolved only by specific resources in your Google Cloud projects.
Zone name: A unique name for your DNS zone.
DNS name: The DNS name of your zone, typically your domain name.
Fill in any other necessary information and click "Create".
Managing DNS Records
Add DNS records: Inside your newly created DNS zone, you can add DNS records by clicking on "Add record set". Select the record type (A, AAAA, CNAME, MX, etc.), the TTL (Time to Live), and enter the appropriate data for your record.
Modify DNS records: To modify an existing record, click on the record set you wish to change, make your modifications, and save the changes.
Delete DNS records: To delete a record, select the record set and click on the "Delete" button.
Configuring DNS Security
DNSSEC (Domain Name System Security Extensions): Google Cloud DNS supports DNSSEC, which adds an additional layer of security by providing cryptographic signatures for DNS data. To enable DNSSEC, navigate to the details page of your DNS zone, click on "DNSSEC", and follow the prompts. Remember, you'll also need to configure DNSSEC with your domain registrar.
Advanced Features
Managing zones and records via the gcloud CLI: Google Cloud DNS can be managed using the gcloud command-line tool, offering a way to script and automate DNS tasks. For example, to create a DNS zone via the CLI:
gcloud dns managed-zones create --dns-name="example.com." --description="A description" my-zone
To add a DNS record, start a transaction and add the record:
gcloud dns record-sets transaction start --zone="my-zone"
gcloud dns record-sets transaction add --name="www.example.com." --type=A --ttl=300 "1.2.3.4" --zone="my-zone"
Then execute the transaction:
gcloud dns record-sets transaction execute --zone="my-zone"
Here's how to find detailed tutorials on Google Cloud DNS:
Official Google Documentation:
Quickstart: Set up DNS records for a domain name with Cloud DNS: This guide provides a step-by-step process to create a managed public zone, configure records, and update your domain name servers. https://cloud.google.com/dns
Tutorial: Set up a domain by using Cloud DNS: This tutorial expands on the quickstart, demonstrating how to use Cloud DNS with a domain registered through a different provider. https://cloud.google.com/dns/docs/zones
Additional Resources:
Google Cloud DNS Cheat Sheet by Tutorials Dojo: This cheat sheet offers a concise overview of Cloud DNS features, record types, and functionalities. https://tutorialsdojo.com/latest-news/
How to Use Cloud DNS to Expose Your Web Page to the Internet by GeeksforGeeks: This guide walks you through a practical example of setting up Cloud DNS to point your domain to a web server hosted on a Compute Engine virtual machine. https://www.geeksforgeeks.org/google-cloud-dns/
Setting up a domain using Cloud DNS [YouTube video]: This video tutorial demonstrates the process of creating a Cloud DNS zone and configuring records to point your domain to a website.
Google Cloud DNS Policies
Google Cloud DNS offers a specific policy type called DNS server policies. These policies manage how DNS queries are forwarded within your Virtual Private Cloud (VPC) network. Here's a breakdown of Cloud DNS server policies:
What they are: A configuration for a VPC network that specifies inbound or outbound DNS forwarding, or both.
Key points:
One policy applies to a single VPC network.
Policies define how DNS queries are forwarded within the VPC.
Two types exist:
Inbound server policy: Allows incoming DNS queries from resources within the VPC to be forwarded to specific DNS servers.
Outbound server policy: (One possible method for outbound forwarding.) Specifies which DNS servers VMs in the VPC should use for resolving external DNS queries.
Benefits: Centralized control over DNS forwarding within your VPC, and improved security by directing queries to internal or approved external DNS servers.
Learning resources: DNS server policies | Google Cloud: https://cloud.google.com/dns/docs/policies – the official Google Cloud documentation, which explains server policies in detail, including how to create and manage them.
Google Cloud DNS policies also provide a powerful mechanism for managing how DNS queries are answered by your Google Cloud DNS managed zones. These policies allow you to configure various aspects of DNS behavior, such as load balancing, failover, geolocation-based routing, and more. By applying DNS policies, you can improve the reliability, performance, and relevance of the responses provided to your users based on their geographic location or other criteria.
Key Features of Google Cloud DNS Policies
Load balancing: Distribute traffic evenly across multiple resources, such as web servers in different geographic locations, to ensure high availability and reliability of your applications.
Failover: Automatically reroute traffic from unhealthy resources to healthy ones in case of failure, minimizing downtime and maintaining service availability.
Geolocation routing: Route users to different resources based on their geographic location, which can help reduce latency by directing users to the closest available server.
DNSSEC: Secure your DNS traffic with DNSSEC (DNS Security Extensions) to prevent attackers from tampering with DNS queries.
Logging and monitoring: Integration with Cloud Monitoring and Cloud Logging gives insight into the operation of your DNS infrastructure and lets you track metrics such as query volumes, response times, and DNSSEC validation outcomes.
Creating and Managing DNS Policies
Access the Cloud DNS page: In the Google Cloud Console, navigate to the "Network Services" section and select "Cloud DNS."
Create a DNS policy: Click on "DNS policies" and then "Create policy." You'll be prompted to configure the policy settings, such as:
Name: Provide a unique name for the policy.
Networks: Select the VPC networks where the policy will apply.
Alternative name server config: Specify alternative name servers if you're setting up a custom DNS architecture.
Logging: Enable or disable query logging for the policy.
DNSSEC: Configure DNSSEC settings if you want to secure your DNS queries.
Configure policy rules: Depending on the features you want to use (e.g., geolocation routing), you may need to define specific rules within your policy. This could involve specifying the geographic locations and the corresponding resources to route traffic to.
Apply the policy: After configuring the policy and its rules, save and apply it to your selected networks. Changes can take a few minutes to propagate.
Monitoring and logging: With logging enabled, you can monitor the performance and health of your DNS configurations through Cloud Monitoring and Cloud Logging. This can provide valuable insights for troubleshooting and optimizing your DNS setup.
Guide to Adding Weighted Round Robin Routing Policies in Google Cloud DNS
Overview
Weighted routing in DNS allows you to distribute traffic across multiple resources, such as servers or load balancers, based on assigned weights. This is particularly useful for load balancing, A/B testing, and gradual rollouts of new services or features.
Step-by-Step Guide for Configuring Weighted Round Robin in Google Cloud DNS
Step 1: Access Google Cloud DNS
Sign in to your Google Cloud Console and navigate to "Network Services" > "Cloud DNS".
Step 2: Select Your DNS Zone
Choose the DNS zone where you want to apply weighted round robin routing. If you haven't created a DNS zone yet, click on "Create Zone" and follow the prompts to set one up.
Step 3: Add or Edit DNS Records
To add a new DNS record, click on "Add record set". To edit an existing record, click on the record set you wish to configure.
Step 4: Configure the Weighted Round Robin DNS Record
Type: Choose the type of DNS record you're configuring (e.g., A, AAAA, CNAME).
DNS name: Specify the DNS name for which you're configuring the routing (e.g., www.example.com).
Resource record data: Enter the IP address (for A or AAAA records) or hostname (for CNAME records) of the target resource.
TTL (Time to Live): Specify the TTL value. This determines how long DNS resolvers are allowed to cache the record.
Weight: Enter the weight for this resource record. The weight must be a non-negative number from 0.0 to 1000.0. The traffic routed to each target is calculated from the ratio of an individual weight to the total sum of all weights for records under the same DNS name. An equivalent record can also be created from the command line, as shown in the sketch below.
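The following is a hedged gcloud sketch of the same weighted record: the zone name, DNS name, and IP addresses are placeholders, and the exact --routing-policy-data format should be confirmed against the current gcloud reference.
gcloud dns record-sets create www.example.com. --zone="my-zone" --type=A --ttl=300 \
    --routing-policy-type=WRR \
    --routing-policy-data="10.0=203.0.113.10;90.0=203.0.113.20"
# Each "weight=rrdata" entry becomes one weighted target; here roughly 10% of queries
# are answered with 203.0.113.10 and 90% with 203.0.113.20.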
Step 5: Save the Record Set
After configuring the weight and other settings for your record, save the record set. Repeat the process for each target resource you want to include in the weighted round robin configuration, ensuring each has its own weight.
Step 6: Verify the Configuration
After setting up your weighted records, it's essential to verify that traffic distribution behaves as expected. Keep in mind that DNS changes might take some time to propagate, depending on the TTL values and DNS caching behavior.
If you want to distribute traffic between two load balancers with a ratio of 10% to one and 90% to the other, you need to set the weights so that their ratio reflects this distribution. Given that a weight can be any non-negative number from 0.0 to 1000.0 and the distribution between the two targets is based on the proportion of their weights relative to the total weight sum, you can choose weights that simplify the calculation and clearly represent this ratio.
Option 1: Direct Percentage Representation
Load balancer 1 (10% of traffic): Set the weight to 100.0 (representing 10%).
Load balancer 2 (90% of traffic): Set the weight to 900.0 (representing 90%).
Option 2: Simplified Representation
Alternatively, you can use a simplified ratio that still maintains the 10:90 distribution:
Load balancer 1 (10% of traffic): Set the weight to 10.0.
Load balancer 2 (90% of traffic): Set the weight to 90.0.
Both options achieve the same traffic distribution ratio. The choice between them may depend on whether you plan to adjust these weights frequently or add more resources into the mix in the future. Using smaller numbers (like 10 and 90) makes the calculation easier and more straightforward, especially when adjusting weights or adding more targets later.
How It Works
The traffic distribution is calculated based on the weight of each record relative to the total sum of weights for all records under the same DNS name. In the first option, the total weight is 100.0 + 900.0 = 1000.0; the first load balancer gets 100.0 / 1000.0 = 10% of the traffic, and the second gets 900.0 / 1000.0 = 90%. In the second option, the total weight is 10.0 + 90.0 = 100.0, and the distribution ratio remains the same: 10.0 / 100.0 = 10% for the first and 90.0 / 100.0 = 90% for the second.
The post Google Cloud DNS Tutorials appeared first on DevOpsSchool.com. View the full article
-
In Cloudera deployments on public cloud, one of the key configuration elements is the DNS. Get it wrong and your deployment may become wholly unusable, with users unable to access and use the Cloudera data services. If the DNS is set up less than ideally, connectivity and performance issues may arise. In this blog, we'll take you through our tried and tested best practices for setting up your DNS for use with Cloudera on Azure. To get started and give you a feel for the DNS dependencies in an Azure deployment of Cloudera, these are the Azure managed services being used:
AKS cluster: data warehouse, data engineering, machine learning, and DataFlow
MySQL database: data engineering
Storage account: all services
Azure Database for PostgreSQL: data lake and data hub clusters
Key vault: all services
Typical customer governance restrictions and their impact
Most Azure users use private networks with a firewall as egress control, and most have restrictions on wildcard firewall rules. Cloudera resources are created on the fly, which means wildcard rules may be declined by the security team. Most Azure users use a hub-spoke network topology, and DNS servers are usually deployed in the hub virtual network or an on-prem data center instead of in the Cloudera VNET. That means if DNS is not configured correctly, the deployment will fail. Most Cloudera customers deploying on Azure allow the use of service endpoints; a smaller set of organizations do not. A service endpoint is a simpler implementation that allows resources on a private network to access managed services on Azure Cloud. If service endpoints are not allowed, firewall rules and private endpoints are the other two options. Most cloud users do not like opening firewall rules because that introduces the risk of exposing private data on the internet. That leaves private endpoints as the only option, which in turn introduces additional DNS configuration for the private endpoints.
Connectivity from a private network to Azure managed services
Firewall to internet: Route from the firewall directly to the Azure managed service endpoint on the internet.
Service endpoint: Azure provides service endpoints for resources on private networks to access the managed services on the internet without going through the firewall. This can be configured at the subnet level. Since Cloudera resources are deployed in different subnets, this configuration must be enabled on all subnets. The DNS records of the managed services using service endpoints are on the internet and managed by Microsoft, and the IP address of such a service is a public IP that is routable from the subnet. Please refer to the Microsoft documentation for details. Not all managed services support service endpoints; in a Cloudera deployment scenario, only storage accounts, PostgreSQL DB, and Key Vault support them. Fortunately, most users allow service endpoints. If a customer doesn't allow service endpoints, they have to go with a private endpoint, which is similar to what needs to be configured in the following content.
Private endpoint: A network interface with a private IP address is created along with the private endpoint, and a private link service is associated with that specific network interface, so that other resources in the private network can access the service through the private network IP address. The key here is for the private resources to find a DNS resolution for that private IP address.
There are two options to store the DNS record:
Azure managed public DNS zones will always be there, but they store different types of IP addresses for the private endpoint. For example: for a storage account private endpoint, the public DNS zone stores the public IP address of that service; for an AKS API server private endpoint, the public DNS zone stores the private IP of that service.
Azure private DNS zone: the DNS records are synchronized to the Azure Default DNS of linked VNETs.
Private endpoints are available for all Azure managed services that are used in Cloudera deployments. As a consequence, for storage accounts, users either use service endpoints or private endpoints. Because the public DNS zone will always return a public IP, the private DNS zone becomes a mandatory configuration. For AKS, both DNS alternatives are suitable. The challenges of private DNS zones are discussed next.
Challenges of private DNS zones on an Azure private network
Important assumptions: As mentioned above for the typical scenario, most Azure users use a hub-and-spoke network architecture and deploy custom private DNS on the hub VNET. The DNS records will be synchronized to the Azure Default DNS of linked VNETs.
Simple architecture use cases
One VNET with a private DNS zone: When a private endpoint is created, Cloudera on Azure registers the private endpoint in the private DNS zone. The DNS record is synchronized to the Azure Default DNS of the linked VNET. If users use a custom private DNS, they can configure a conditional forward to Azure Default DNS for the domain suffix of the FQDN.
Hub-and-spoke VNET with Azure Default DNS: With hub-spoke VNETs and Azure Default DNS, this is still acceptable. The only problem is that resources on the un-linked VNET will not be able to access the AKS cluster, but since AKS is used by Cloudera, that does not pose any major issues.
The challenging part
The most popular network architecture among Azure consumers is a hub-spoke network with custom private DNS servers deployed either on the hub VNET or an on-premises network. Since DNS records are not synchronized to the Azure Default DNS of the hub VNET, the custom private DNS server cannot find the DNS record for the private endpoint. And because the Cloudera VNET uses the custom private DNS server on the hub VNET, the Cloudera resources on the Cloudera VNET will go to the custom private DNS server for DNS resolution of the FQDN of the private endpoint, and provisioning will fail. With the DNS server deployed in the on-prem network, there isn't an Azure Default DNS associated with the on-prem network, so the DNS server can't find the DNS record of the FQDN of the private endpoint.
Configuration best practices
Against this background, there are several options.
Option 1: Disable the Private DNS Zone
Use an Azure managed public DNS zone instead of a private DNS zone. For data warehouse: create data warehouses through the Cloudera command line interface with the parameter "privateDNSZoneAKS" set to "None." For Liftie-based data services: the entitlement "LIFTIE_AKS_DISABLE_PRIVATE_DNS_ZONE" must be set; customers can request this entitlement either through a JIRA ticket or by having their Cloudera solution engineer make the request on their behalf. The sole drawback of this option is that it does not apply to data engineering, since that data service will create and use a MySQL private DNS zone on the fly. There is at present no option to disable private DNS zones for data engineering.
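As an illustration of the private-DNS-zone pattern described above (and of Option 2 below, which pre-creates zones and links VNETs), here is a hedged az CLI sketch. The resource group, zone, and VNET names are placeholders, and the blob-storage zone name applies only to storage-account private endpoints.
$ az network private-dns zone create -g my-rg -n privatelink.blob.core.windows.net
# Link both the Cloudera VNET and the hub VNET so resolvers in either network can see the records:
$ az network private-dns link vnet create -g my-rg -z privatelink.blob.core.windows.net \
    -n cloudera-link --virtual-network cloudera-vnet --registration-enabled false
$ az network private-dns link vnet create -g my-rg -z privatelink.blob.core.windows.net \
    -n hub-link --virtual-network hub-vnet --registration-enabled false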
Option 2: Pre-create Private DNS Zones
Pre-create private DNS zones and link both the Cloudera and hub VNETs to them. The advantage of this approach is that both data warehouse and Liftie-based data services support pre-created private DNS zones. There are, however, also a few drawbacks: for Liftie, the private DNS zone needs to be configured when registering the environment, and once past the environment registration stage it cannot be configured; DE will need a private DNS zone for MySQL and does not support pre-configured private DNS zones; and on-premises networks can't be linked to a private DNS zone, so if the DNS server is on an on-prem network there are no workable solutions.
Option 3: Create a DNS Server as a Forwarder
Create a couple of DNS servers (for HA considerations) behind a load balancer in the Cloudera VNET, and configure a conditional forward to the Azure Default DNS of the Cloudera VNET. Then configure a conditional forward from the company's custom private DNS server to the DNS server in the Cloudera subnet. The drawback of this option is that additional DNS servers are required, which leads to additional administration overhead for the DNS team.
Option 4: Azure-Managed DNS Resolver
Create a dedicated /28 subnet in the Cloudera VNET for the Azure private DNS resolver inbound endpoint, and configure a conditional forward from the custom private DNS to the Azure private DNS resolver inbound endpoint.
Summary
Bringing all things together, consider these best practices for setting up your DNS with Cloudera on Azure:
For the storage account, key vault, and PostgreSQL DB: Use service endpoints as the first choice. If service endpoints are not allowed, pre-create private DNS zones, link them to the VNET where the DNS server is deployed, and configure conditional forwards from the custom private DNS to Azure Default DNS. If the custom private DNS is deployed in the on-premises network, use the Azure DNS resolver or another DNS server as a DNS forwarder on the Cloudera VNET and conditionally forward the DNS lookup from the private DNS to the resolver endpoint.
For the data warehouse, DataFlow, or machine learning data services: Disable the private DNS zone and use the public DNS zone instead.
For the data engineering data service: Configure the Azure DNS resolver or another DNS server as a DNS forwarder on the Cloudera VNET, and conditionally forward the DNS lookup from the private DNS to the resolver endpoint.
Please refer to the Microsoft documentation for the details of setting up an Azure DNS Private Resolver. For more background reading on network and DNS specifics for Azure, have a look at our documentation for the various data services: DataFlow, Data Engineering, Data Warehouse, and Machine Learning. We're also happy to discuss your specific needs; in that case please reach out to your Cloudera account manager or get in touch. The post DNS Zone Setup Best Practices on Azure appeared first on Cloudera Blog. View the full article
-
Tagged with: dns, best practices (and 1 more)
-
You can now resolve the private Kubernetes API server endpoint of your Amazon Elastic Kubernetes Service (EKS) cluster in AWS GovCloud (US) regions. This allows you to easily connect to an EKS cluster that is only accessible within a VPC, including when using AWS services such as AWS Direct Connect and VPC peering. View the full article
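For context, here is a hedged AWS CLI sketch of how a cluster's endpoint access is typically inspected and restricted; the cluster name is a placeholder and the commands are illustrative rather than part of the announcement.
$ aws eks describe-cluster --name my-cluster \
    --query "cluster.resourcesVpcConfig.{private:endpointPrivateAccess,public:endpointPublicAccess}"
# Restrict an existing cluster to private API server endpoint access only:
$ aws eks update-cluster-config --name my-cluster \
    --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true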
-
Today, AWS announced the launch of IP-based routing for Amazon Route 53, AWS’s Domain Name System (DNS) cloud service. Route 53 provides customers with multiple routing options, such as geolocation routing, geoproximity routing, latency-based routing, and weighted routing to route their end users to optimal endpoints. With the addition of IP-based routing, customers are now additionally empowered to fine-tune their DNS routing approach based on the Classless Inter-Domain Routing (CIDR) block that the query-originating IP address belongs to, allowing them to leverage knowledge of their end user base to optimize performance or network transit costs. View the full article
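As a rough illustration of the new workflow, the sketch below creates a CIDR collection and attaches a CIDR block to a named location, which record sets can then reference. Names, IDs, and ranges are placeholders, and the option shapes should be checked against the current Route 53 CLI reference.
$ aws route53 create-cidr-collection --name office-networks --caller-reference 2022-06-01-a
$ aws route53 change-cidr-collection --id <collection-id> \
    --changes '[{"LocationName":"eu-offices","Action":"PUT","CidrList":["203.0.113.0/24"]}]'
# Resource record sets then reference the collection and location through a
# CidrRoutingConfig block in a change-resource-record-sets request.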
-
A quick overview of Azure DNS
We offer two types of Azure DNS zones, private and public, for hosting your private DNS and public DNS records.
Azure Private DNS: Azure Private DNS provides a reliable and secure DNS service for your virtual network. It manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. By using private DNS zones, you can use your own custom domain name instead of the Azure-provided names during deployment.
Azure Public DNS: DNS domains in Azure DNS are hosted on Azure's global network of DNS name servers. Azure DNS uses anycast networking, so each DNS query is answered by the closest available DNS server to provide fast performance and high availability for your domain.
More information on the additional services that are part of the Azure DNS offering can be found on the Azure DNS product page. View the full article
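For readers who prefer the CLI, here is a hedged az sketch of the two zone types; the resource group, VNET, and domain names are placeholders.
# Public zone: create it, add a record, and list the Azure name servers to delegate to.
$ az network dns zone create -g my-rg -n example.com
$ az network dns record-set a add-record -g my-rg -z example.com -n www -a 203.0.113.10
$ az network dns zone show -g my-rg -n example.com --query nameServers
# Private zone: create it and link it to a virtual network so VMs there can resolve it.
$ az network private-dns zone create -g my-rg -n internal.example.com
$ az network private-dns link vnet create -g my-rg -z internal.example.com -n my-link \
    --virtual-network my-vnet --registration-enabled true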
-
Amazon FSx for Windows File Server, a fully managed service that provides shared file storage built on Windows Server, today announced that you can now access file systems using any Domain Name System (DNS) name of your choosing. Each Amazon FSx file system has a default DNS name for accessing it. Starting today, you can now also associate alternate DNS names for accessing your file systems. View the full article
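A hedged AWS CLI sketch of the new capability (the file system ID and alias are placeholders): associate an alternate DNS name with an existing file system, then list its aliases. Per the FSx documentation, you also need a record in your own DNS service that points the alias at the file system's default DNS name.
$ aws fsx associate-file-system-aliases --file-system-id fs-0123456789abcdef0 \
    --aliases financials.corp.example.com
$ aws fsx describe-file-system-aliases --file-system-id fs-0123456789abcdef0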
-
Sometimes when you try to ping a website, update a system, or perform any task that requires an active internet connection, you may get the error message 'temporary failure in name resolution' on your system. The post How to Resolve "Temporary failure in name resolution" Issue first appeared on Tecmint: Linux Howtos, Tutorials & Guides. View the full article
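The error usually points at a broken resolver configuration. Here is a hedged first-aid sketch: the resolver IP is only an example, and on systems where systemd-resolved or NetworkManager manages /etc/resolv.conf you should change the DNS settings through those tools instead of editing the file directly.
$ cat /etc/resolv.conf                       # check whether any nameserver is configured
$ echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
$ ping -c 2 google.com                       # confirm name resolution works again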
-