Search the Community

Showing results for tags 'azure'.

  1. We've just added new service status forums for Amazon Web Services (AWS) and Google Cloud Platform (GCP): https://devopsforum.uk/forum/32-aws-service-status/ and https://devopsforum.uk/forum/33-gcp-service-status/. We've also added a new 'Cloud Service Status' block on the right-hand side of the main homepage, showing the latest three statuses. An Azure service status forum has been added as well, but so far it has no posts; apparently, if everything is running smoothly on Azure, the feed (https://azure.status.microsoft/en-gb/status/feed/) stays empty. Here are some other status feeds for Azure services: https://azure.status.microsoft/en-us/status/history/ and https://feeder.co/discover/d3ca207d93/azure-microsoft-com-en-us-status
  2. Azure Container Apps Azure Container Apps is a fully managed, serverless container service built on Kubernetes that can be compared to ECS in AWS or Cloud Run in GCP. Compared to AKS, the Azure integrations are already done for you. The best example is managed identity: here you only need to enable a single setting, whereas in AKS the setup is more involved and the recommended approach changes every couple of years. A minimal sketch of using that identity follows this item. View the full article
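As a rough illustration of the managed identity point above (not official guidance): once system-assigned identity is enabled on a Container App, granting it access to another Azure resource is a single role assignment. The sketch below uses Az PowerShell; the app, resource group, and storage account names are made up, and the exact property holding the identity's principal ID may vary by Az.App module version.

    # Hypothetical names; assumes the Az and Az.App modules and an app with
    # system-assigned identity already enabled.
    $app = Get-AzContainerApp -Name 'my-app' -ResourceGroupName 'my-rg'

    # Flattened property name may differ between Az.App versions;
    # $app.Identity.PrincipalId is another common shape.
    $principalId = $app.IdentityPrincipalId

    New-AzRoleAssignment -ObjectId $principalId `
        -RoleDefinitionName 'Storage Blob Data Reader' `
        -Scope '/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage'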
  3. Advance your data career by learning Azure, a skill that is in high demand. View the full article
  4. Looking for a new job? You can learn to be an Azure administrator and start an exciting new tech career. Now at $28 with coupon code SECURE20 through April 7th. View the full article
  5. Ubuntu 23.10 experimental image with x86-64-v3 instruction set now available on Azure Canonical is enabling enterprises to evaluate the performance of their most critical workloads in an experimental Ubuntu image on Azure compiled with x86-64-v3, which is a microarchitecture level that has the potential for performance gains. Developers can use this image to characterise workloads, which can help inform planning for a transition to x86-64-v3 and provide valuable input to the community working to make widespread adoption of x86-64-v3 a reality. The x86-64-v3 instruction set enables hardware features that have been added by chip vendors since the original instruction set architecture (ISA) commonly known as x86-64-v1, x86-64, or amd64. Canonical Staff Engineer Michael Hudson-Doyle recently wrote about the history of the x86-64/amd64 instruction sets, what these v1 and v3 microarchitecture levels represent, and how Canonical is evaluating their performance. While fully backwards compatible, later versions of these feature groups are not available on all hardware, so when deciding on an ISA image you must choose to maximise the supported hardware or to get access to more recent hardware capabilities. Canonical plans to continue supporting x86-64-v1 as there is a significant amount of legacy hardware deployed in the field. However, we also want to enable users to take advantage of newer x86-64-v3 hardware features that provide the opportunity for performance improvements the industry isn’t yet capitalising on. Untapped performance and power benefits Intel and Canonical partner closely to ensure that Ubuntu takes full advantage of the advanced hardware features Intel silicon offers, and the Ubuntu image on Azure is an interim step towards giving the industry access to the capabilities of x86-64-v3 and understanding the benefits that it offers. Intel has made x86-64-v3 available since Intel Haswell was first announced a decade ago. Support in their low power processor family is more recent, arriving in the Gracemont microarchitecture which was first in the 12th generation of Intel Core processors. Similarly, AMD has had examples since 2015, and emulators such as QEMU have supported x86-64-v3 since 2022. Yet, with this broad base of hardware availability, distro support of the features in the x86-64-v3 microarchitecture level is not widespread. In the spirit of enabling Ubuntu everywhere and ensuring that users can benefit from the unique features on different hardware families, Canonical feels strongly about enabling a transition to x86-64-v3 while remaining committed to our many users on hardware that doesn’t support v3. x86-64-v3 is available in a significant amount of hardware, and provides the opportunity for performance improvements which are currently being left on the table. This is why we believe that v3 is the next logical microarchitecture level to offer in Ubuntu, and Michael’s blog post explains in greater detail why v3 should be chosen instead of v2 or v4. Not just a porting exercise The challenge with enabling the transition to v3 is that while we expect a broad range of performance improvements depending on the workload, the results are much more nuanced. From Canonical’s early benchmarking we see that certain workloads see significant benefit from the adoption of x86-64-v3; however there are outliers that regress and need further analysis. 
Canonical continues to do benchmarking, with plans to evaluate different compilers, compiler parameters, and configurations of hostOS and guestOS. In certain cases, such as the Glibc Log2 benchmark, we have reproducibly seen up to a 60% improvement. On the other hand, we also see other benchmarks that regress significantly. When digging in, we found unexpected behaviour in the compiled code. For example, in one of the benchmarks we verified an excessive number of moves between registers, leading to much worse performance due to the increased latency. In another situation, we noticed a large code size increase, as enabling x86-64-v3 on optimised SSE code caused the compiler to expand it into 17x more instructions, due to a possible bug during the translation to VEX encoding. With community efforts, these outliers could be resolved. However, they will require interdisciplinary collaboration to do so. This also underscores the necessity of benchmarking different types of workloads, so that we can understand their specific performance and bottlenecks. That’s why we believe it’s important to enable workloads to run on Azure, so that a broader community can give feedback and enable further optimisation. Try Ubuntu 23.10 with x86-64-v3 on Azure today The community now has access to resources on Azure to easily evaluate the performance of x86-64-v3 for their workloads, so that they can understand the benefits of migrating and can identify where improvements are still required. What is being shared today is experimental and for evaluation and benchmarking purposes only, which means that it won’t receive security updates or other maintenance updates you would expect for an image you could use in production. When x86-64-v3 is introduced for production workloads there will be a benefit to being able to run both v3 and v1 depending on the workload and hardware available. As is usually the case, the answer to the question of whether to run on a v3 image or a v1 image is ‘it depends’. This image provides the tools to answer that cost, power, and performance optimisation problem. In addition to the availability of the cloud image on Azure, we’ve also previously posted on the availability of Ubuntu 23.04 rebuilt to target the x86-64-v3 microarchitecture level, and made available installer images from that archive. These are additional tools that the community can use to benchmark, when cloud environments can’t be targeted. In order to access the image on Azure and use it, you can follow the instructions in our discourse post. Please be sure to leave your feedback there, or Contact us directly to discuss your use case. Further reading Optimising Ubuntu performance on amd64 architecture Trying out Ubuntu 23.04 on x86-64-v3 rebuild for yourself View the full article
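The canonical way to get started is the Discourse post linked above. Purely as a hedged sketch of what launching any marketplace image with Az PowerShell looks like, something along these lines would apply; the image URN below is a placeholder, not the real identifier of the experimental x86-64-v3 image.

    # Placeholder names throughout; substitute the image URN from Canonical's Discourse post.
    $cred = Get-Credential   # admin credentials for the new VM

    New-AzVM -ResourceGroupName 'v3-benchmark-rg' `
        -Name 'ubuntu-v3-test' `
        -Location 'westeurope' `
        -Image 'Canonical:<offer-from-discourse-post>:<sku>:latest' `
        -Size 'Standard_D4s_v5' `
        -Credential $cred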
  6. London, 20 March 2024. Canonical has announced that Ubuntu Core, its operating system optimised for the Internet of Things (IoT) and edge, has received Microsoft Azure IoT Edge Tier 1 supported platform status from Microsoft. This collaboration brings computation, storage, and artificial intelligence (AI) capabilities in the cloud closer to the edge of the network. The power of the cloud on the edge Azure IoT Edge enables businesses to remotely and securely deploy and manage cloud-native workloads directly on their IoT devices, at scale, and with robust observability. With the ability to deploy and manage containerised applications on devices, teams can process data, run machine learning models, perform analytics, and carry out other tasks right at the edge of the network. This approach helps reduce latency, conserve bandwidth, and it provides more immediate insights from data near to where it is generated. It is especially useful in scenarios where real-time decision-making is crucial, where network connectivity might be unreliable, or where data privacy and security concerns demand local data processing. The security of Ubuntu Core Ubuntu Core is an operating system designed specifically for the IoT and embedded devices. Its range of features make it ideal for secure, reliable, and scalable deployments. Built on the power of Snaps, Ubuntu Core provides a minimal core with support for multiple architectures and types of devices. Security is baked-in with secure boot and full disk encryption, and over-the-air (OTA) transactional updates to ensure that devices are always up to date. Coupled with Canonical’s Long Term Support, which offers up to 10 years of maintenance and security updates, Ubuntu Core provides long-term peace of mind for IoT implementations. With the introduction of the Azure IoT Edge Snaps suite, the process of deploying edge workloads to the extensive array of devices and architectures supported by Ubuntu Core has become a streamlined, seamless, experience. Combined with the ability to remotely manage and configure both the processing and system components of fleets of devices directly from Azure, teams benefit from robust security and optimised performance. “With Microsoft committing their support for Ubuntu Core with the release of the Microsoft Azure IoT Edge Snaps we see another example of the industry’s enthusiasm to adopt the operating system to fulfil all of their IoT needs. We look forward to growing this relationship further with Microsoft in the future”. – Michael Croft-White, Engineering Director. “In collaboration with Canonical, we are making it simpler to reliably connect devices to Microsoft Azure IoT services. Snap support in Azure IoT Edge helps ensure consistent performance, enhanced security, and efficient updates across Linux distributions that support Snaps.” Kam VedBrat, GM, Azure IoT Further reading More information on Ubuntu Core can be found at ubuntu.com/core. Our “Intro to Ubuntu Core 22” webinar is a comprehensive resource for everything you need to know about Ubuntu Core. If you are not already familiar with Microsoft’s Azure IoT Edge, more information can be found here. Are you interested in running Ubuntu Core with Azure IoT on your devices and are working on a commercial project? Get in touch About Canonical Canonical, the publisher of Ubuntu, provides open-source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. 
With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. View the full article
  7. The Exam AZ-204: Developing Solutions for Microsoft Azure is an essential step toward earning the Microsoft Certified: Azure Developer Associate certification. This certification demonstrates proficiency in all phases of development, from requirements gathering and design to deployment, security, maintenance, performance tuning, and monitoring within the Azure environment. To pass the AZ-204 exam, you'll need to be adept at developing Azure compute solutions, working with Azure storage, implementing Azure security, and monitoring, troubleshooting, and optimizing Azure solutions. It also covers connecting to and consuming Azure services and third-party services. As of the latest update on January 22, 2024, it's important to review the study guide for the most current skills measured. Candidates should have a solid foundation in programming in an Azure-supported language and proficiency using Azure CLI, Azure PowerShell, and other tools. It's also beneficial to have at least two years of professional development experience, including hands-on experience with Azure. For those preparing for the AZ-204 exam, there are various resources available. Microsoft Learn offers self-paced learning paths, while Coursera provides a Professional Certificate program that covers not only Azure fundamentals but also more advanced topics like cloud security, data security, and cloud management. Pluralsight also offers a comprehensive learning path tailored to the exam, covering key topics such as Azure Compute Solutions, Azure Storage, and Azure Security, to name a few. Practical experience and hands-on practice are highly recommended to reinforce learning and ensure readiness for the exam. Consider utilizing practice tests and training courses offered by Coursera, Pluralsight, or Microsoft's own resources to fill any gaps in knowledge and to get accustomed to the exam format. Remember, this certification is not just about knowing Azure but also applying that knowledge to solve real-world challenges effectively. It's a valuable certification for anyone looking to validate their Azure development skills and knowledge.
Course assets by section:
Section 1, Introduction, Organizing Your Kitchen, AZ-204_ Developing Solutions for Microsoft Azure PDF, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1686811151019-1242%20-%20AZ-204_%20Developing%20Solutions%20for%20Microsoft%20Azure.pdf
Section 2, Sous Vide or Sauté: Develop Azure Compute Solutions, Considering Compute Options, Napa Cabbage Crunch Salad Recipe, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1686269979172-Napa%20Cabbage%20Crunch%20Salad.pdf
Section 3, Practical Pantries: Develop for Azure Storage, Configuring Azure Blob Storage, Mom's Volcano Cookies, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1686270042003-MomsVolcanoCookies.pdf
Section 4, Too Many Cooks: Implement Azure Security, Understanding Authentication and Authorization Options, Eggs in Hot Water, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1686270094026-EggsInHotWater.pdf
Section 4, Too Many Cooks: Implement Azure Security, Using Managed Identities, Comparison Chart of Two Types of Managed Identities, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1684945970439-1242_S04_L05_UsingManagedIdentities.png
Section 5, Limiting Leftovers: Optimize and Monitor, Leveraging Application Insights, Baked Mac and Cheese, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1686270167023-BakedMacAndCheese.pdf
Section 6, Gourmet Delivery: Work with Azure and Third-Party Services, Connecting with Third-Party Services, Plate Like a Pro, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1686270223216-PlateLikeAPro.pdf
Section 7, Practice Exam, Preparing for the AZ-204 Exam, Study Guide: Mapping of Microsoft Skills Measured to Course Assets, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1685461440774-1242_AZ-204%20Vendor-Course%20Objective%20Mapping_ForStudyGuide.pdf
Section 8, Conclusion, Course Summary, Course Summary Slides, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1684758007817-1242_S08_L01_CourseSummaryForPDF.pdf
Section 8, Conclusion, Course Summary, Study Guide: Mapping of Microsoft Skills Measured to Course Assets, https://acloudguru-content-attachment-production.s3-accelerate.amazonaws.com/1685461489590-1242_AZ-204%20Vendor-Course%20Objective%20Mapping_ForStudyGuide.pdf
The post Microsoft Certified: Azure Developer Associate – Exam AZ-204: Developing Solutions for Microsoft Azure appeared first on DevOpsSchool.com. View the full article
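Hands-on practice of the kind the post recommends can start very small. As an illustrative sketch only (names and region are arbitrary), provisioning a web app with Az PowerShell, one of the tools the exam expects familiarity with, looks roughly like this:

    # Create a resource group, an App Service plan, and a web app to practice against.
    New-AzResourceGroup -Name 'az204-practice-rg' -Location 'eastus'

    New-AzAppServicePlan -ResourceGroupName 'az204-practice-rg' -Name 'az204-plan' `
        -Location 'eastus' -Tier 'Basic' -NumberofWorkers 1 -WorkerSize 'Small'

    New-AzWebApp -ResourceGroupName 'az204-practice-rg' -Name 'az204-demo-app-12345' `
        -Location 'eastus' -AppServicePlan 'az204-plan'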
  8. By leveraging the wide array of public images available on Docker Hub, developers can accelerate development workflows, enhance productivity, and, ultimately, ship scalable applications that run like clockwork. When building with public content, acknowledging the potential operational risks associated with using that content without proper authentication is crucial. In this post, we will describe best practices for mitigating these risks and ensuring the security and reliability of your containers. Import public content locally There are several advantages to importing public content locally. Doing so improves the availability and reliability of your public content pipeline and protects you from failed CI builds. By importing your public content, you can easily validate, verify, and deploy images to help run your business more reliably. For more information on this best practice, check out the Open Container Initiative’s guide on Consuming Public Content. Configure Artifact Cache to consume public content Another best practice is to configure Artifact Cache to consume public content. Azure Container Registry’s (ACR) Artifact Cache feature allows you to cache your container artifacts in your own Azure Container Registry, even for private networks. This approach limits the impact of rate limits and dramatically increases pull reliability when combined with geo-replicated ACR, allowing you to pull artifacts from the region closest to your Azure resource. Additionally, ACR offers various security features, such as private networks, firewall configuration, service principals, and more, which can help you secure your container workloads. For complete information on using public content with ACR Artifact Cache, refer to the Artifact Cache technical documentation. Authenticate pulls with public registries We recommend authenticating your pull requests to Docker Hub using subscription credentials. Docker Hub offers developers the ability to authenticate when building with public library content. Authenticated users also have access to pull content directly from private repositories. For more information, visit the Docker subscriptions page. Microsoft Artifact Cache also supports authenticating with other public registries, providing an additional layer of security for your container workloads. Following these best practices when using public content from Docker Hub can help mitigate security and reliability risks in your development and operational cycles. By importing public content locally, configuring Artifact Cache, and setting up preferred authentication methods, you can ensure your container workloads are secure and reliable. Learn more about securing containers Try Docker Scout to assess your images for security risks. Looking to get up and running? Use our Quickstart guide. Have questions? The Docker community is here to help. Subscribe to the Docker Newsletter to stay updated with Docker news and announcements. Additional resources for improving container security for Microsoft and Docker customers Visit Microsoft Learn. Read the introduction to Microsoft’s framework for securing containers. Learn how to manage public content with Azure Container Registry. View the full article
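For the "import public content locally" practice described above, a minimal Az PowerShell sketch might look like the following; the registry and resource group names are examples, and the Azure CLI (az acr import) offers the same operation.

    # Copy a public Docker Hub image into your own ACR so builds stop pulling
    # anonymously from Docker Hub. The cmdlet also accepts source registry
    # credentials, so the pull can be authenticated as the post recommends.
    Import-AzContainerRegistryImage -ResourceGroupName 'platform-rg' `
        -RegistryName 'myteamregistry' `
        -SourceRegistryUri 'docker.io' `
        -SourceImage 'library/nginx:1.25' `
        -TargetTag 'nginx:1.25'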
  9. Companies worldwide are committed to reducing their IT carbon footprint, championing a more sustainable future through initiatives focused on efficiency and cost optimization. Cloud sustainability is not only about reducing the environmental impact of cloud usage, but also about making smart business decisions that align to corporate values, adhere to regulatory requirements, and enable the pursuit of long-term business goals. To understand the impact of cloud computing on carbon emissions, precise measurement, trustworthy data, and robust tools are essential. That’s why we’re excited to announce two new capabilities to optimize your Microsoft Azure emissions: Azure Carbon Optimization (preview) is a free, cutting-edge capability that empowers Azure developers and IT professionals to understand and optimize emissions stemming from Azure usage. By providing insights into carbon emissions and offering recommendations for enhancing cloud efficiency, this tool aligns with the Microsoft commitment to environmental responsibility and supports you in achieving your cloud sustainability goals. Microsoft Azure emissions insights (preview) in sustainability data solutions in Microsoft Fabric enables you to unify and analyze emissions data for Azure usage. By having access to your Azure emissions data in Microsoft Fabric, you can query and drill down into Azure resource level emissions for advanced reporting and analysis.  Both tools offer a holistic solution for organizations aiming to reduce their carbon footprint by optimizing specific resources or workloads within Azure. With Azure Carbon Optimization (preview), engineering and IT teams can use ready-to-consume insights and recommendations for optimizing their carbon emissions, all within the Azure portal. Microsoft Azure emissions insights (preview) enable data analysts and engineers to dive deeper into emissions data, allowing them to slice and dice the data and perform deeper analytics using Microsoft Fabric. Once your organization can access insights into the carbon emissions generated at the resource or workload level, reduction efforts can begin. This involves optimizing cloud systems for efficiency to benefit the environment and enhance overall performance. Azure administrators can already see a company-wide view of cloud emissions in the Emissions Impact Dashboard. To optimize your carbon footprint, you can take advantage of more granular insights into carbon emissions originating from specific resources or workloads. Like any major organizational shift, reducing carbon emissions requires contributions from every corner of your company. In this blog, we will not only explore the benefits of Azure Carbon Optimization and Microsoft Azure emissions insights, but also how the FinOps framework can guide your business through the complexities of carbon emission reduction to help achieve both your environmental and financial goals. Align IT sustainability with ESG regulations Organizations around the world are setting carbon neutrality goals for themselves, which are furthered by new environmental regulations and standards introduced by global government and regulatory bodies, with a significant driver being environmental, social, and governance (ESG) regulations. These governmental standards dictate ESG-related actions, reporting, and disclosures. 
Microsoft provides offerings to help customers with their ESG reporting needs with tools and products available with Microsoft Cloud for Sustainability to help your organization collect and manage more ESG data and get fuller visibility into your environmental impact. Our goal is to help prepare you for any new reporting requirements by compiling a comprehensive ESG data estate. IT sustainability plays a pivotal role in a company’s ESG management strategy because it serves as a cornerstone for mitigating environmental impact, ensuring responsible cloud usage, and reinforcing the overall commitment to sustainable development practices. There are also direct economic benefits for reducing carbon emissions, such as long-term operational cost savings. Above all, organizations that proactively address environmental issues and reduce their carbon footprint will be better positioned for long-term success, especially in a business landscape where sustainability is increasingly important. Measure and reduce your emissions with Azure Carbon Optimization Our free Azure Carbon Optimization tool, now in public preview and accessible through your Azure portal, is a window into your cloud resources emissions, ultimately leading to recommendations on how to cut back. It empowers Azure users to closely monitor and optimize their carbon footprint. Azure Carbon Optimization is designed to provide everyone in your organization, from developers, to architects, to IT professionals, with a resource-level view of emissions data. This empowers your engineers to take proactive measures to mitigate emissions and track progress right from the Azure portal. Azure Carbon Optimization uses the same carbon accounting methodology as the Emissions Impact Dashboard. Developers can work towards maximizing resource utilization while minimizing carbon emissions from the cloud, helping ensure that every deployed resource serves a purpose, eliminates waste, and reduces environmental impact. The tool also presents carbon emission reduction in equivalent terms that are easy for anyone to understand. Subsequently, it provides developers with carbon savings recommendations that are based on analyzing resource utilization. Suggestions include deleting or resizing underutilized resources. With these ready-to-consume recommendations, you can optimize your Azure usage, avoid carbon emissions, and promote sustainable development practices. This way, you not only enhance your environmental performance, but also achieve cost savings and efficiency. Perform even deeper Azure emissions analysis with Microsoft Fabric Microsoft Azure emissions insights, now in public preview, is a part of the sustainability data solutions in Microsoft Fabric. It helps unify, process, query, and perform deeper analysis of Azure emissions data. In addition to emissions data and related pipelines, Power BI dashboards are provided with Microsoft Azure emissions insights to drill-down and compare emissions data across subscriptions and resources. This helps IT administrators identify patterns in Azure emissions that evolve with time and change with Azure resource usage. Unified Azure emissions data empowers data analysts to enrich the emissions data with custom information such as department using subscriptions and resources. They can then query the data and build analytic models for interesting insights such as Azure emissions by departments and seasonality of emissions by usage. 
Leverage FinOps best practices to help optimize carbon emissions Fostering a culture of accountability, efficiency, and governance across an organization stands as a key objective within the FinOps framework, which aims to help organizations optimize their cloud to maximize business value. Efficiency has a positive impact on innovation by freeing up resources and allowing organizations to invest more in modernization, research, and development. FinOps supports the customer journey by establishing a cross-functional team that includes finance, IT, engineers, and business leaders to create a culture of accountability where everyone takes ownership of their cloud usage. As ESG regulations compel adherence to complex emissions reporting requirements, integrating FinOps best practices can help teams to better manage and optimize carbon emissions. When viewed through the lens of environmental awareness, FinOps can assist with best practices that foster accountability, efficiency, and governance to enable data-driven decisions. Leveraging these best practices in tandem with Azure Carbon Optimization and Microsoft Azure emissions insights empowers your organization to be a catalyst for change, transforming cloud practices into a force for sustainability by helping track, analyze, and optimize emissions towards a greener, more responsible cloud ecosystem. Reach your sustainability goals with data-driven Azure insights By employing these capabilities and adhering to FinOps practices, your organization can actively track, assess, and mitigate your carbon emissions. You’ll not only gain a detailed understanding of the emissions impact associated with your Azure resources, but also valuable insight into your compliance posture for any coming ESG regulations. Next steps Visit the Azure Carbon Optimization documentation and our new learning collection to discover more about how to start leveraging the data-driven insights provided by Azure Carbon Optimization for a more environmentally responsible and efficient operation. Continue your sustainability journey with the Azure Well-Architected Framework sustainability guidance and explore Sustainability outcomes and benefits for business through the Cloud Adoption Framework. This guidance provides insights into end-to-end sustainability considerations in your cloud estate. Visit the documentation for Microsoft Azure emissions insights and this new blog to learn more about deploying it in your Fabric environment and get started with centralizing and analyzing your Azure emissions data. This capability can be leveraged to analyze the trends of your Azure emissions over time by subscriptions and resources. For more on how FinOps best practices can help you maximize your cloud business value while addressing the complexities of carbon emission reduction, explore Microsoft’s resources for FinOps: Assess your organization’s gaps using the Microsoft FinOps Review Assessment. Gain hands-on experience with Microsoft solutions that empower FinOps through the Microsoft FinOps Interactive Guides. Explore a range of related Microsoft products and services on the FinOps on Azure homepage. Visit the Azure Carbon Optimization documentation Start leveraging data-driven insights and reduce your emissions today Learn more The post Achieving sustainable growth with Azure and FinOps best practices appeared first on Microsoft Azure Blog. View the full article
  10. In Cloudera deployments on public cloud, one of the key configuration elements is the DNS. Get it wrong and your deployment may become wholly unusable, with users unable to access and use the Cloudera data services. If the DNS is set up less than ideally, connectivity and performance issues may arise. In this blog, we'll take you through our tried and tested best practices for setting up your DNS for use with Cloudera on Azure. To get started and give you a feel for the DNS dependencies, these are the Azure managed services used in an Azure deployment for Cloudera:
AKS cluster: data warehouse, data engineering, machine learning, and DataFlow
MySQL database: data engineering
Storage account: all services
Azure Database for PostgreSQL: data lake and data hub clusters
Key Vault: all services
Typical customer governance restrictions and the impact
Most Azure users use private networks with a firewall as egress control. Most users have restrictions on firewalls for wildcard rules. Cloudera resources are created on the fly, which means wildcard rules may be declined by the security team. Most Azure users use a hub-spoke network topology. DNS servers are usually deployed in the hub virtual network or an on-prem data center instead of in the Cloudera VNET, which means that if DNS is not configured correctly, the deployment will fail. Most Cloudera customers deploying on Azure allow the use of service endpoints; a smaller set of organizations do not. A service endpoint is a simpler way to allow resources on a private network to access managed services on Azure. If service endpoints are not allowed, firewall and private endpoints are the other two options. Most cloud users do not like opening firewall rules because that introduces the risk of exposing private data on the internet. That leaves private endpoints as the only option, which also introduces additional DNS configuration for the private endpoints.
Connectivity from a private network to Azure managed services
Firewall to Internet: route from the firewall to the Azure managed service endpoint on the internet directly.
Service endpoint: Azure provides service endpoints for resources on private networks to access managed services on the internet without going through the firewall. This can be configured at the subnet level. Since Cloudera resources are deployed in different subnets, this configuration must be enabled on all subnets. The DNS records of managed services using service endpoints are on the internet and managed by Microsoft. The IP address of the service is a public IP, routable from the subnet. Please refer to the Microsoft documentation for details. Not all managed services support service endpoints: in a Cloudera deployment scenario, only storage accounts, PostgreSQL DB, and Key Vault support them. Fortunately, most users allow service endpoints. If a customer doesn't allow service endpoints, they have to go with a private endpoint, which is similar to what is configured in the following content.
Private endpoint: a private endpoint creates a network interface with a private IP address, associated with a private link service, so that other resources in the private network can access the service through that private IP address. The key here is for the private resources to find a DNS resolution for that private IP address.
There are two options for storing the DNS record:
Azure managed public DNS zones will always be there, but they store different types of IP addresses for the private endpoint. For example: for a storage account private endpoint, the public DNS zone stores the public IP address of that service; for an AKS API server private endpoint, the public DNS zone stores the private IP of that service.
Azure private DNS zone: the DNS records will be synchronized to the Azure default DNS of the linked VNET.
Private endpoints are available for all Azure managed services used in Cloudera deployments. As a consequence, for storage accounts, users either use service endpoints or private endpoints. Because the public DNS zone will always return a public IP, the private DNS zone becomes a mandatory configuration. For AKS, both DNS alternatives are suitable. The challenges of private DNS zones will be discussed next.
Challenges of private DNS zones on an Azure private network
Important assumptions: as mentioned above for the typical scenario, most Azure users run a hub-and-spoke network architecture and deploy custom private DNS in the hub VNET. The DNS records will be synchronized to the Azure default DNS of the linked VNET.
Simple architecture use cases:
One-VNET scenario with a private DNS zone: when a private endpoint is created, Cloudera on Azure registers the private endpoint in the private DNS zone. The DNS record is synchronized to the Azure default DNS of the linked VNET. If users use custom private DNS, they can configure a conditional forwarder to the Azure default DNS for the domain suffix of the FQDN.
Hub-and-spoke VNET with Azure default DNS: this is still acceptable. The only problem is that resources on the un-linked VNET will not be able to access the AKS cluster, but since AKS is used by Cloudera, that does not pose any major issues.
The challenging part: the most popular network architecture among Azure consumers is a hub-spoke network with custom private DNS servers deployed either in the hub VNET or in an on-premises network. Since DNS records are not synchronized to the Azure default DNS of the hub VNET, the custom private DNS server cannot find the DNS record for the private endpoint. And because the Cloudera VNET uses the custom private DNS server in the hub VNET, Cloudera resources in the Cloudera VNET will go to that server for DNS resolution of the private endpoint's FQDN, and provisioning will fail. With the DNS server deployed in the on-prem network, there is no Azure default DNS associated with the on-prem network, so the DNS server cannot find the DNS record for the private endpoint's FQDN.
Configuration best practices
Against this background, these are the options:
Option 1: Disable the private DNS zone. Use the Azure managed public DNS zone instead of a private DNS zone. For data warehouse: create data warehouses through the Cloudera command line interface with the parameter "privateDNSZoneAKS" set to "None." For Liftie-based data services: the entitlement "LIFTIE_AKS_DISABLE_PRIVATE_DNS_ZONE" must be set; customers can request this entitlement either through a JIRA ticket or by having their Cloudera solution engineer make the request on their behalf. The sole drawback of this option is that it does not apply to data engineering, since that data service creates and uses a MySQL private DNS zone on the fly. There is at present no option to disable private DNS zones for data engineering.
Option 2: Pre-create private DNS zones. Pre-create private DNS zones and link both the Cloudera and hub VNETs to them. The advantage of this approach is that both the data warehouse and Liftie-based data services support pre-created private DNS zones. There are, however, a few drawbacks: for Liftie, the private DNS zone must be configured when registering the environment and cannot be configured afterwards; data engineering needs a private DNS zone for MySQL and does not support pre-configured private DNS zones; and on-premises networks cannot be linked to a private DNS zone, so if the DNS server is on an on-prem network there is no workable solution.
Option 3: Create DNS servers as forwarders. Create a couple of DNS servers (for HA) behind a load balancer in the Cloudera VNET, and configure conditional forwarding to the Azure default DNS of the Cloudera VNET. Then configure conditional forwarding from the company's custom private DNS server to the DNS servers in the Cloudera subnet. The drawback of this option is that additional DNS servers are required, which means additional administration overhead for the DNS team.
Option 4: Azure-managed DNS resolver. Create a dedicated /28 subnet in the Cloudera VNET for an Azure Private DNS Resolver inbound endpoint, and configure conditional forwarding from the custom private DNS to that inbound endpoint.
Summary
Bringing it all together, consider these best practices for setting up your DNS with Cloudera on Azure:
For the storage account, Key Vault, and PostgreSQL DB: use service endpoints as the first choice. If service endpoints are not allowed, pre-create private DNS zones, link them to the VNET where the DNS server is deployed, and configure conditional forwarding from the custom private DNS to the Azure default DNS. If the custom private DNS is deployed in the on-premises network, use the Azure DNS resolver or another DNS server as a forwarder in the Cloudera VNET and conditionally forward the DNS lookup from the private DNS to the resolver endpoint.
For the data warehouse, DataFlow, or machine learning data services: disable the private DNS zone and use the public DNS zone instead.
For the data engineering data service: configure the Azure DNS resolver or another DNS server as a DNS forwarder in the Cloudera VNET, and conditionally forward the DNS lookup from the private DNS to the resolver endpoint.
Please refer to the Microsoft documentation for the details of setting up an Azure DNS Private Resolver. For more background reading on network and DNS specifics for Azure, have a look at our documentation for the various data services: DataFlow, Data Engineering, Data Warehouse, and Machine Learning. We're also happy to discuss your specific needs; in that case please reach out to your Cloudera account manager or get in touch. A minimal sketch of pre-creating and linking a private DNS zone follows this item. The post DNS Zone Setup Best Practices on Azure appeared first on Cloudera Blog. View the full article
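A minimal Az PowerShell sketch of Option 2 above (pre-creating a private DNS zone and linking it to both the Cloudera and hub VNETs) might look like this; zone, resource group, and VNET names are examples only.

    # Pre-create the zone used by storage account private endpoints and link it
    # to both VNETs so their default DNS can resolve the private endpoint records.
    $clouderaVnetId = '/subscriptions/<sub>/resourceGroups/net-rg/providers/Microsoft.Network/virtualNetworks/cloudera-vnet'
    $hubVnetId      = '/subscriptions/<sub>/resourceGroups/net-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet'

    $zone = New-AzPrivateDnsZone -ResourceGroupName 'dns-rg' -Name 'privatelink.blob.core.windows.net'

    foreach ($vnetId in @($clouderaVnetId, $hubVnetId)) {
        New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName 'dns-rg' `
            -ZoneName $zone.Name `
            -Name ('link-' + ($vnetId -split '/')[-1]) `
            -VirtualNetworkId $vnetId
    }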
  11. Azure DevOps Server and Team Foundation Server follow the Microsoft Product Fixed Lifecycle Policy of 10 years. The first 5 years of Mainstream Support provide feature updates, platform updates, security updates, functionality fixes, and quality improvements. The second 5 years of Extended Support provide critical security updates only for the latest release of each version. Azure DevOps Server and Team Foundation Server are serviced through security or servicing patches that provide targeted cumulative bug fixes for existing features in the product. For the best and most secure product experience, we strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.1 from the Azure DevOps Server download page. How to get updates We announce patches and new releases in the Azure DevOps blog. The release notes for each version provide details on the features and patches for that specific version. Supported versions Note: These versions are subject to the original product lifecycle extended end date as described in the Microsoft Product Fixed Lifecycle Policy.
Server listing and supported version:
Azure DevOps Server 2022: Azure DevOps Server 2022.1
Azure DevOps Server 2020: Azure DevOps Server 2020.1.2
Azure DevOps Server 2019: Azure DevOps Server 2019.1.2
Team Foundation Server 2018: Team Foundation Server 2018.3.2
Team Foundation Server 2017: Team Foundation Server 2017.3.1
Team Foundation Server 2015: Team Foundation Server 2015.4.2
The post Azure DevOps Server Product Lifecycle and Servicing appeared first on Azure DevOps Blog. View the full article
  12. One of the most popular cloud-native, PaaS (Platform as a Service) products in Microsoft Azure is Azure App Service. It enables you to easily deploy and host web and API applications in Azure. The service supports ways to configure App Settings and Connection String within the Azure App Service instance. Depending on who has access […] The article Terraform: Deploy Azure App Service with Key Vault Secret Integration appeared first on Build5Nines. View the full article
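The Build5Nines article covers the Terraform configuration; as a rough non-Terraform illustration of the same integration, an App Service app setting can point at a Key Vault secret using a Key Vault reference. Names and the secret URI below are placeholders, and the app's managed identity still needs permission to read secrets from the vault.

    # Set an app setting whose value is resolved from Key Vault at runtime.
    # Note: -AppSettings sets the full collection, so in real use merge the
    # existing settings into the hashtable first.
    $settings = @{
        'DatabasePassword' = '@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/DbPassword/)'
    }

    Set-AzWebApp -ResourceGroupName 'app-rg' -Name 'my-web-app' -AppSettings $settings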
  13. What is Azure PowerShell? Azure PowerShell is a set of cmdlets (command-lets) for managing Azure resources from the PowerShell command line. It provides a comprehensive and powerful toolset for managing Azure resources, including virtual machines, storage accounts, databases, and networking components. Azure PowerShell is widely used by IT professionals to automate tasks, manage complex deployments, and troubleshoot Azure issues. What are cmdlets? Cmdlets, pronounced "command-lets", are the smallest units of functionality in PowerShell. They are lightweight commands that are used in the PowerShell environment. Each cmdlet is a .NET Framework class that packages a specific set of functionality. Cmdlets follow a verb-noun naming pattern, such as Get-Help, Get-Process, and Start-Service, which makes them self-descriptive and easy to understand. They are designed to do one thing and do it well, with a consistent interface that makes them easy to chain together in scripts for more complex tasks. Cmdlets can be used to perform operations like managing system processes, reading and writing files, and manipulating data structures.
Install Azure PowerShell on Windows
1. Run the following command from PowerShell to determine your PowerShell version: $PSVersionTable.PSVersion
2. Determine if you have the AzureRM PowerShell module installed: Get-Module -Name AzureRM -ListAvailable
3. Update to Windows PowerShell 5.1.
4. Install .NET Framework 4.7.2 or later.
5. Set the PowerShell script execution policy to remote signed or less restrictive. Check the current policy with: Get-ExecutionPolicy -List
6. Set the PowerShell execution policy to remote signed: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
7. Install the Az module using PowerShellGet: Install-Module -Name Az or Install-Module -Name Az -Repository PSGallery -Force
8. To update: Update-Module -Name Az
Install Azure PowerShell on Linux
Open the Terminal or another shell host application and run pwsh to start PowerShell. Use the Install-Module cmdlet to install the Az PowerShell module: Install-Module -Name Az -Repository PSGallery -Force
PowerShell Commands List
Here are 25 basic PowerShell commands (command name, aliases, and description):
Set-Location (cd, chdir, sl): Sets the current working location to a specified location.
Get-Content (cat, gc, type): Gets the content of the item at the specified location.
Add-Content (ac): Adds content to the specified items, such as adding words to a file.
Set-Content (sc): Writes or replaces the content in an item with new content.
Copy-Item (copy, cp, cpi): Copies an item from one location to another.
Remove-Item (del, erase, rd, ri, rm, rmdir): Deletes the specified items.
Move-Item (mi, move, mv): Moves an item from one location to another.
Set-Item (si): Changes the value of an item to the value specified in the command.
New-Item (ni): Creates a new item.
Start-Job (sajb): Starts a Windows PowerShell background job.
Compare-Object (compare, dif): Compares two sets of objects.
Group-Object (group): Groups objects that contain the same value for specified properties.
Invoke-WebRequest (curl, iwr, wget): Gets content from a web page on the Internet.
Measure-Object (measure): Calculates the numeric properties of objects, and the characters, words, and lines in string objects, such as files ...
Resolve-Path (rvpa): Resolves the wildcard characters in a path, and displays the path contents.
Resume-Job (rujb): Restarts a suspended job.
Set-Variable (set, sv): Sets the value of a variable. Creates the variable if one with the requested name does not exist.
Show-Command (shcm): Creates Windows PowerShell commands in a graphical command window.
Sort-Object (sort): Sorts objects by property values.
Start-Service (sasv): Starts one or more stopped services.
Start-Process (saps, start): Starts one or more processes on the local computer.
Suspend-Job (sujb): Temporarily stops workflow jobs.
Wait-Job (wjb): Suppresses the command prompt until one or all of the Windows PowerShell background jobs running in the session are ...
Where-Object (?, where): Selects objects from a collection based on their property values.
Write-Output (echo, write): Sends the specified objects to the next command in the pipeline. If the command is the last command in the pipeline, ...
Azure PowerShell Commands and Cheat Sheet
The post Azure PowerShell Tutorials: Installations and User Guide appeared first on DevOpsSchool.com. View the full article
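Combining the Az module with the core cmdlets listed above, a typical pipeline looks like this (any subscription with a few VMs will do; the region filter is just an example):

    # Sign in, then list VMs in one region sorted by name.
    Connect-AzAccount

    Get-AzVM |
        Where-Object { $_.Location -eq 'eastus' } |
        Sort-Object -Property Name |
        Select-Object Name, ResourceGroupName, Location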
  14. Publication of Ubuntu Server 20.04 LTS has been halted since the end of August 2023. Canonical has since confirmed an undocumented policy when publishing to the Microsoft Azure Partner Center: a maximum of 100 image versions can be published to a Plan for a given Marketplace Offer. The maximum was reached with the publication of image version 20.04.202308310, and collaboration with Microsoft has determined that the only solution to resume publication is to deprecate older image versions... View the full article
  15. If you are deploying your application to Azure from Azure Pipelines, you might want to leverage the ability to do so without using secrets, thanks to Workload identity federation. In this article, I will demonstrate how to automate the configuration of your Azure DevOps project, with everything pre-configured to securely deploy applications to Azure... View the full article
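The article automates this end to end; as a hedged sketch of just the Entra ID side, a federated credential on the app registration backing the service connection can be created with Az PowerShell. The organization, project, and service connection names below are placeholders, and the issuer/subject values must match exactly what Azure DevOps shows when you create the workload identity federation service connection.

    # Assumes an existing app registration used by the Azure DevOps service connection.
    $appObjectId = (Get-AzADApplication -DisplayName 'ado-deploy-app').Id

    New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId `
        -Name 'ado-workload-identity' `
        -Issuer 'https://vstoken.dev.azure.com/<organization-id>' `
        -Subject 'sc://<organization>/<project>/<service-connection-name>' `
        -Audience 'api://AzureADTokenExchange'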
  16. The post Terraform: Create Azure Windows VM with file, remote-exec & local-exec provisioner appeared first on DevOpsSchool.com. View the full article
  17. In this post, I will discuss how to utilize Azure Key Vault (AKV) with Azure Red Hat OpenShift (ARO) cluster. I will explain the relevant terms and their definitions from the architectural standpoint and how the flow works at a glance, and I will give an example of how to deploy this in the ARO cluster. The objective of this article is to enable you to store and retrieve secrets stored in AKV from your ARO cluster. View the full article
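The cluster-side wiring is covered in the full article; the Azure-side prerequisite it assumes, a Key Vault holding a secret, can be sketched with Az PowerShell as follows (names and values are examples only):

    # Create a vault and store a secret that the ARO workload will later retrieve.
    New-AzKeyVault -Name 'aro-demo-kv' -ResourceGroupName 'aro-rg' -Location 'eastus'

    $secretValue = ConvertTo-SecureString 'example-value' -AsPlainText -Force
    Set-AzKeyVaultSecret -VaultName 'aro-demo-kv' -Name 'app-secret' -SecretValue $secretValue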
  18. Backup is defined as the process of creating copies of data and storing them in separate locations or mediums, while restore is defined as the process of retrieving the backed-up data and returning it to its original location or system or to a new one. In other words, backup is akin to data preservation, and restore is in essence data retrieval. View the full article
  19. We’re excited to announce that GitHub Advanced Security for Azure DevOps is now generally available and is ready for you to use in your own Azure DevOps repos! You can now enable code, secret, and dependency scanning within Azure Repos and take advantage of the new product updates. Learn how to enable Advanced Security in your Azure Repos > Thanks to your great feedback, we were able to identify issues and deliver updates that address key improvements since our public preview. You wanted: Faster onboarding after registering for Advanced Security The ability to enable multiple repos simultaneously More upfront clarity in billing Better visibility into all enabled repo alerts through a single pane of glass View the full article
  20. HashiCorp and Microsoft have partnered to create Terraform modules that follow Microsoft's Azure Well-Architected Framework and best practices. In previous blog posts, we’ve demonstrated how to build a secure Azure reference architecture and deploy securely into Azure with HashiCorp Terraform and Vault, as well as how to manage post-deployment operations. This post looks at how HashiCorp and Microsoft have created building blocks that allow you to repeatedly, securely, and cost-effectively accelerate AI adoption on Azure with Terraform. Specifically, it covers how to do this by using Terraform to provision Azure OpenAI services... View the full article
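The post above is about Terraform modules, so the following is not the module code it describes; purely for orientation, the underlying resource those modules provision, an Azure OpenAI (Cognitive Services) account, can be created with Az PowerShell roughly like this (names, SKU, and region are examples):

    # Create an Azure OpenAI account; model deployments are configured separately.
    New-AzCognitiveServicesAccount -ResourceGroupName 'ai-rg' `
        -Name 'my-openai-account' `
        -Type 'OpenAI' `
        -SkuName 'S0' `
        -Location 'eastus'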
  21. In today’s digital landscape, web application security is paramount. As businesses increasingly migrate their operations to the cloud, the importance of safeguarding web applications hosted on platforms like Microsoft Azure cannot be overstated. This article will delve deep into the top 10 web application security risks specific to the Microsoft Azure cloud environment. For each […] The article Top 10 Web Application Security Risks in Microsoft Azure and Ways to Mitigate Them appeared first on Build5Nines. View the full article
  22. We're excited to announce the version 2.0.0 release of the Packer Azure plugin, which enables users to build Azure virtual hard disks, managed images, and Compute Gallery (shared image gallery) images. The plugin is one of the most popular ways to build Azure Virtual Machine images and is used by Microsoft Azure via the Azure Image Builder. For the past year, we have been tracking the changes to the Azure SDKs and keeping our eyes on the upcoming deprecations, which were sure to disrupt how Packer interacts with Azure. When we found that the version of the Azure SDK the Packer plugin was using would soon be deprecated, we began work to migrate to the Terraform-tested HashiCorp Go Azure SDK. The HashiCorp Go Azure SDK is generated from and based on the Azure API definitions to provide parity with the official Azure SDK, making it a near drop-in replacement with the ability to resolve issues around auto-rest, polling, and API versioning. Version 2.0.0 of the Packer Azure plugin addresses the known deprecations with minimal disruption to the user, introduces new highly requested features, and combines the stability of the Packer Azure plugin with the Terraform Azure provider. View the full article
  23. Just because everything worked when you provisioned your infrastructure, you can’t assume everything will continue to work properly after deployment. Continuous validation is a foundational feature for HashiCorp Terraform Cloud Plus that helps make sure infrastructure is working as expected. Use cases for continuous validation include closing security gaps, controlling budgets, dealing with certificate expiration, or even just knowing whether a virtual machine (VM) is up and running... View the full article
  24. HashiCorp Terraform is a great tool for deploying and managing Microsoft Azure resources. This includes management of Azure Storage Accounts and Blob Containers. Azure Storage is one of the primary, foundational PaaS (Platform as a Service) services in Microsoft Azure for storing files and other blobs (binary large objects) of data. This article will show […] The article Terraform: Deploy Azure Storage Account and Blob Container appeared first on Build5Nines. View the full article