Search the Community
Showing results for tags 'hashicorp'.
-
IBM has announced its acquisition of HashiCorp Inc., a leading multi-cloud infrastructure automation company, for $6.4 billion! This acquisition is poised to revolutionize the hybrid cloud landscape, offering enterprises a comprehensive end-to-end solution to navigate the complexities of today’s AI-driven application growth. Let’s look at the details of this cloud-changing acquisition and its implications […] The article Analyzing IBM’s Acquisition of HashiCorp: A Game-Changer in Hybrid Cloud Management appeared first on Build5Nines. View the full article
-
Tagged with: hashicorp, acquisitions (and 1 more)
-
IBM has confirmed it has entered into a definitive agreement with HashiCorp in which it will acquire the California-based software company for $35 per share, equating to a $6.4 billion deal. In its announcement, IBM said the acquisition of HashiCorp is designed to bolster its position in the hybrid cloud and AI markets. Rather than selling off its business and winding down, HashiCorp has confirmed that it will continue to build products and services, operating as a division of IBM Software with the backing of a much bigger company.
IBM and HashiCorp
Speaking about the deal, IBM CEO Arvind Krishna said: “Enterprise clients are wrestling with an unprecedented expansion in infrastructure and applications across public and private clouds, as well as on-prem environments.” Krishna added: “HashiCorp has a proven track record of enabling clients to manage the complexity of today's infrastructure and application sprawl. Combining IBM's portfolio and expertise with HashiCorp's capabilities and talent will create a comprehensive hybrid cloud platform designed for the AI era.” IBM likes what it sees in the deal, revealing an “attractive financial opportunity” and the potential to expand the total addressable market. The deal also aligns with the company’s broader strategy. In IBM’s most recent earnings call, CFO Jim Kavanaugh revealed that around 70% of HashiCorp’s revenue currently comes from US companies, adding that only around one in five of the Forbes Global 2000 are HashiCorp customers, alluding to the potential for scaling the business under the IBM umbrella. In the week leading up to the announcement, HashiCorp shares rose by around 34.8%. IBM investors seemed less sure about the deal, with share prices seeing a less impressive 1.6% uptick.
More from TechRadar Pro
These are the best cloud hosting providers around right now
IBM Consulting is the latest to order an immediate return to office
Store your data off-prem with the best cloud storage and best cloud backup tools
View the full article
-
Everyone knew HashiCorp was attempting to find a buyer. Few suspected it would be IBM. View the full article
-
Today we announced that HashiCorp has signed an agreement to be acquired by IBM to accelerate the multi-cloud automation journey we started almost 12 years ago. I’m hugely excited by this announcement and believe this is an opportunity to further the HashiCorp mission and to expand to a much broader audience with the support of IBM. When we started the company in 2012, the cloud landscape was very different than today. Mitchell and I were first exposed to public clouds as hobbyists, experimenting with startup ideas, and later as professional developers building mission-critical applications. That experience made it clear that automation was absolutely necessary for cloud infrastructure to be managed at scale. The transformative impact of the public cloud also made it clear that we would inevitably live in a multi-cloud world. Lastly, it was clear that adoption of this technology would be driven by our fellow practitioners who were reimagining the infrastructure landscape. We founded HashiCorp with a mission to enable cloud automation in a multi-cloud world for a community of practitioners. Today, I’m incredibly proud of everything that we have achieved together. Our products are downloaded hundreds of millions of times each year by our passionate community of users. Each year, we certify tens of thousands of new users on our products, who use our tools each and every day to manage their applications and infrastructure. We’ve partnered with thousands of customers, including hundreds of the largest organizations in the world, to power their journey to multi-cloud. They have trusted us with their mission-critical applications and core infrastructure. One of the most rewarding aspects of infrastructure is quietly underpinning incredible applications around the world. We are proud to enable millions of players to game together, deliver loyalty points for ordering coffee, connect self-driving cars, and secure trillions of dollars of transactions daily. This is why we’ve always believed that infrastructure enables innovation. The HashiCorp portfolio of products has grown significantly since we started the company. We’ve continued to work with our community and customers to identify their challenges in adopting multi-cloud infrastructure and transitioning to zero trust approaches to security. These challenges have in turn become opportunities for us to build new products and services on top of the HashiCorp Cloud Platform. This brings us to why I’m excited about today's announcement. We will continue to build products and services as HashiCorp, and will operate as a division inside IBM Software. By joining IBM, HashiCorp products can be made available to a much larger audience, enabling us to serve many more users and customers. For our customers and partners, this combination will enable us to go further than we could as a standalone company. The community around HashiCorp is what has enabled our success. We will continue to be deeply invested in the community of users and partners who work with HashiCorp today. Further, through the scale of the IBM and Red Hat communities, we plan to significantly broaden our reach and impact. While we are more than a decade into HashiCorp, we believe we are still in the early stages of cloud adoption. With IBM, we have the opportunity to help more customers get there faster, to accelerate our product innovation, and to continue to grow our practitioner community. I’m deeply appreciative of the support of our users, customers, employees, and partners.
It has been an incredibly rewarding journey to build HashiCorp to this point, and I’m looking forward to this next chapter.
Additional Information and Where to Find It
HashiCorp, Inc. (“HashiCorp”), the members of HashiCorp’s board of directors and certain of HashiCorp’s executive officers are participants in the solicitation of proxies from stockholders in connection with the pending acquisition of HashiCorp (the “Transaction”). HashiCorp plans to file a proxy statement (the “Transaction Proxy Statement”) with the Securities and Exchange Commission (the “SEC”) in connection with the solicitation of proxies to approve the Transaction. David McJannet, Armon Dadgar, Susan St. Ledger, Todd Ford, David Henshall, Glenn Solomon and Sigal Zarmi, all of whom are members of HashiCorp’s board of directors, and Navam Welihinda, HashiCorp’s chief financial officer, are participants in HashiCorp’s solicitation. Information regarding such participants, including their direct or indirect interests, by security holdings or otherwise, will be included in the Transaction Proxy Statement and other relevant documents to be filed with the SEC in connection with the Transaction. Additional information about such participants is available under the captions “Board of Directors and Corporate Governance,” “Executive Officers” and “Security Ownership of Certain Beneficial Owners and Management” in HashiCorp’s definitive proxy statement in connection with its 2023 Annual Meeting of Stockholders (the “2023 Proxy Statement”), which was filed with the SEC on May 17, 2023 (and is available at https://www.sec.gov/ix?doc=/Archives/edgar/data/1720671/000114036123025250/ny20008192x1_def14a.htm). To the extent that holdings of HashiCorp’s securities have changed since the amounts printed in the 2023 Proxy Statement, such changes have been or will be reflected on Statements of Change in Ownership on Form 4 filed with the SEC (which are available at https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany&CIK=0001720671&type=&dateb=&owner=only&count=40&search_text=). Information regarding HashiCorp’s transactions with related persons is set forth under the caption “Related Person Transactions” in the 2023 Proxy Statement. Certain illustrative information regarding the payments that may be owed, and the circumstances in which they may be owed, to HashiCorp’s named executive officers in a change of control of HashiCorp is set forth under the caption “Executive Compensation—Potential Payments upon Termination or Change in Control” in the 2023 Proxy Statement. With respect to Ms. St. Ledger, certain of such illustrative information is contained in the Current Report on Form 8-K filed with the SEC on June 7, 2023 (and is available at https://www.sec.gov/ix?doc=/Archives/edgar/data/1720671/000162828023021270/hcp-20230607.htm). Promptly after filing the definitive Transaction Proxy Statement with the SEC, HashiCorp will mail the definitive Transaction Proxy Statement and a WHITE proxy card to each stockholder entitled to vote at the special meeting to consider the Transaction. STOCKHOLDERS ARE URGED TO READ THE TRANSACTION PROXY STATEMENT (INCLUDING ANY AMENDMENTS OR SUPPLEMENTS THERETO) AND ANY OTHER RELEVANT DOCUMENTS THAT HASHICORP WILL FILE WITH THE SEC WHEN THEY BECOME AVAILABLE BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION.
Stockholders may obtain, free of charge, the preliminary and definitive versions of the Transaction Proxy Statement, any amendments or supplements thereto, and any other relevant documents filed by HashiCorp with the SEC in connection with the Transaction at the SEC’s website (http://www.sec.gov). Copies of HashiCorp’s definitive Transaction Proxy Statement, any amendments or supplements thereto, and any other relevant documents filed by HashiCorp with the SEC in connection with the Transaction will also be available, free of charge, at HashiCorp’s investor relations website (https://ir.hashicorp.com/), or by emailing HashiCorp’s investor relations department (ir@hashicorp.com).
Forward-Looking Statements
This communication may contain forward-looking statements that involve risks and uncertainties, including statements regarding (i) the Transaction; (ii) the expected timing of the closing of the Transaction; (iii) considerations taken into account in approving and entering into the Transaction; and (iv) expectations for HashiCorp following the closing of the Transaction. There can be no assurance that the Transaction will be consummated. Risks and uncertainties that could cause actual results to differ materially from those indicated in the forward-looking statements, in addition to those identified above, include: (i) the possibility that the conditions to the closing of the Transaction are not satisfied, including the risk that required approvals from HashiCorp’s stockholders for the Transaction or required regulatory approvals to consummate the Transaction are not obtained, on a timely basis or at all; (ii) the occurrence of any event, change or other circumstance that could give rise to a right to terminate the Transaction, including in circumstances requiring HashiCorp to pay a termination fee; (iii) possible disruption related to the Transaction to HashiCorp’s current plans, operations and business relationships, including through the loss of customers and employees; (iv) the amount of the costs, fees, expenses and other charges incurred by HashiCorp related to the Transaction; (v) the risk that HashiCorp’s stock price may fluctuate during the pendency of the Transaction and may decline if the Transaction is not completed; (vi) the diversion of HashiCorp management’s time and attention from ongoing business operations and opportunities; (vii) the response of competitors and other market participants to the Transaction; (viii) potential litigation relating to the Transaction; (ix) uncertainty as to timing of completion of the Transaction and the ability of each party to consummate the Transaction; and (x) other risks and uncertainties detailed in the periodic reports that HashiCorp files with the SEC, including HashiCorp’s Annual Report on Form 10-K. All forward-looking statements in this communication are based on information available to HashiCorp as of the date of this communication, and, except as required by law, HashiCorp does not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made. View the full article
-
Earlier this week, HashiCorp opened our new Madrid Tech Hub, marking a significant milestone in our commitment to grow our European presence and support companies with the Infrastructure Lifecycle Management (ILM) and Security Lifecycle Management (SLM) software that has become essential for enterprises around the world. The new Tech Hub is located at Paseo de la Castellana in Nuevos Ministerios, Madrid. Its charter extends beyond business operations, aiming to also make a positive impact on the local economy and community. And it’s well positioned to support the thriving community of HashiCorp User Groups (HUGs), with hundreds of members in the Madrid group. In order to build and support the new location, three HashiCorp employees were given the opportunity to be part of the Madrid Founders Program, a six-month assignment to support new employees and cultivate the HashiCorp community in Madrid. Here’s the inside story of the Madrid Tech Hub launch, from the people who know it best. Will Farley, Senior Manager, Solutions Engineering, EMEA Why did you apply to become a founding member of the Madrid Tech Hub? I love building things and developing talent. So the opportunity to get involved with this game-changing new initiative at HashiCorp was something I felt I needed to be a part of. The potential for the Tech Hub in the future really excites me. What are you most looking forward to? I have the opportunity to really make a difference and shape the success of the company's future. Moreover, I’m excited about the experience of working in a different country, embracing new cultures, and growing as an individual. What do you hope to take away from this experience? A deep understanding of the beginning-to-end setup of a global, multi-function hub. This will allow me to think more widely in the future when managing my teams. A bonus would be to come away with a bit more Spanish than just ‘Hola’! Joe Colandro, Senior Manager, Solutions Engineering, US Why did you apply to become a founding member of the Madrid Tech Hub? I was excited by the opportunity to be a part of something strategic that will have a massive impact on our entire organization. I love helping people get better and achieve their goals, and the Madrid founder role is an opportunity to do this for more people. What we are doing in Madrid hasn't been done at HashiCorp before and I am excited to enter uncharted waters. Lastly, it’s a chance to have some unique and extraordinary life experiences for myself and my family. How are you preparing for your Madrid stay? I will be moving to Madrid with my wife and our two young boys, aged 7 and 9. We’ve spent most of our time building a list of all of the soccer (football, fútbol) we are going to see in Spain and around Europe. What does your role as a founding member of the Madrid Tech Hub include? Our role as founders is to ensure the smooth onboarding of the new employees joining HashiCorp at the Tech Hub. We have been meticulously crafting a shared onboarding experience for all of the roles in the Tech Hub over the last several months. We, the founders, will deliver some of that, but we’ll also facilitate subject matter experts from around the company to deliver the best experience possible. Having been at HashiCorp for a while now, it is our responsibility to enshrine the company principles, Tao, and team culture from the beginning. Julia Friedman, Senior Manager, Solutions Architecture Maturity Why did you apply to become a founding member of the Madrid Tech Hub?
I love building new things, especially here at HashiCorp. I originally joined the company to help establish a new team, and some of my most enjoyable experiences here have been helping to launch new processes, functions, and teams. I really enjoy getting to examine all sides of a problem and getting to understand it, and building new orgs is a particularly enjoyable way for me to do that. I also enjoy working cross-functionally: it’s part of what my team does best, so getting to work alongside Solution Engineering, Customer Services, Value Engineering, and Services is particularly appealing to me. Finally, my wife and I always hoped to have the chance to live in Europe, so this is an incredible opportunity for us. Why is HashiCorp opening a Tech Hub in Madrid? The Tech Hub concept is a new one for HashiCorp, but it’s critical to our ability to scale while still being present with our customers. Having a group of experts who are able to manage our critical customer interactions at scale allows us to be as effective as possible in the key parts of our customers’ lifecycles. But why pick Madrid? We looked for a balance of a number of key factors: Madrid is the second largest city in the EU and has had a huge influx of talent from around the world (more than 20% of Madrid’s residents weren’t born in Spain). The Madrid region also has 19 universities and more than 325,000 university students. Weighing the data, we saw that Madrid was a fantastic option for our first Tech Hub. How are you preparing for your Madrid stay? It’s been exciting for my wife and me to discuss where we want to live and how we want to spend our time in Spain. We both love to travel, so we’re already planning a long list of weekend outings. We’re also making plans to host friends and family who want to visit us while we’re in Spain. One of our biggest tasks has been preparing paperwork for Kona, our dog, so that he can accompany us on this adventure. At work, I’ve been busy preparing to launch my new team and transitioning my old team to my successor. But I’ve also been spending time and energy building out content for onboarding our new hires and establishing new supporting processes for their function. It’s been a lot of shuffling and early morning meetings (I’m coming from US Pacific Time), but I’m excited about the launch. Want to be part of the HashiCorp Madrid Tech Hub? We're looking for passionate individuals to join the team at our new Madrid Tech Hub location as we expand our presence in this thriving city. Come be a part of our adventure as we build the future together. Join us, and let's grow with HashiCorp. See our available roles in Madrid at HashiCorp.com/careers. View the full article
-
Google Cloud's flagship cloud conference — Google Cloud Next — wrapped up April 11 and HashiCorp was fully engaged with demos, breakout sessions, presentations, and experts at our lively booth. This post shares announcements from the event and highlights recent developments in our partnership. HashiCorp and Google Cloud help organizations control cloud spend, improve their risk profile, and unblock developer productivity for faster time to market. The strength of our partnership can be seen in this recent milestone: The Google Cloud Terraform provider has now surpassed 600 million downloads. The sheer scale of that number demonstrates that HashiCorp technologies underpin a significant portion of Google Cloud services and provide developer-friendly ways to scale infrastructure and security. HashiCorp-Google Cloud developments on display at Google Cloud Next include:
Partnership update:
* HashiCorp joins the new Google Distributed Cloud partner program
Product integrations:
* Secrets sync with Google Cloud Secrets Manager
* Terraform Google Cloud provider-defined functions
* Consul fleet-wide availability on GKE Autopilot
Presentations, demos, and webinars:
* On the floor at Google Cloud Next
* Scaling infrastructure as code with Terraform on Google Cloud
Partnership update
HashiCorp joins the new Google Distributed Cloud partner program
At Google Cloud Next, Google announced that customers can now take advantage of an expanded marketplace of independent software vendors (ISVs) with the Google Cloud Ready — Distributed Cloud program. The new program works with partners to validate their solutions by tuning and enhancing existing integrations and features to better support customer use cases for GDC, which can help identify software solutions that are compatible with GDC more quickly. HashiCorp is part of a diverse group of software partners that have committed to validating their solutions in this program. Read Google’s blog post to learn more about the program.
Product integrations
Sync secrets with Google Cloud Secrets Manager
Vault Enterprise secrets sync, now generally available in Vault Enterprise 1.16, is a new feature that helps organizations manage secrets sprawl by centralizing the governance and control of secrets that are stored within other secret managers. Secrets sync lets users manage multiple external secrets managers, which are called destinations in Vault. We’re proud to announce that, at the time of Vault 1.16’s launch, Google Cloud Secrets Manager is one of several supported destinations.
Terraform Google Cloud provider-defined functions
We have announced the general availability of provider-defined functions in the Google Cloud Terraform provider. This release represents yet another step in our unique approach to ecosystem extensibility. Provider-defined functions allow anyone in the Terraform community to build custom functions within providers and extend the capabilities of Terraform. You can find examples of provider-defined functions in the officially supported Google Cloud and Kubernetes providers at our blog on Terraform 1.8 adding provider functions.
Consul fleet-wide availability on GKE Autopilot
As more customers use multiple cloud services or microservices, they face the difficulty of consistently managing and connecting their services across various environments, including on-premises datacenters, multiple clouds, and existing legacy systems.
HashiCorp Consul's service mesh addresses this challenge by securely and consistently connecting applications on any runtime, network, cloud platform, or on-premises setup. In the Google Cloud ecosystem, Consul can be deployed across Google Kubernetes Engine (GKE) and Anthos GKE. Now, Consul 1.16 is also supported on GKE Autopilot, Google Cloud’s fully managed Kubernetes platform for containerized workloads. Consul 1.17 is currently on track to be supported on GKE Autopilot later this year. You can learn more about the benefits of GKE Autopilot and how to deploy Consul on GKE Autopilot in our blog post on GKE Autopilot support for Consul.
Presentations, demos, and webinars
On the floor at Google Cloud Next
HashiCorp held two speaking sessions at Google Cloud Next: Multi-region, multi-runtime, multi-project infrastructure as code and Scaling Infrastructure as Code: Proven Strategies and Productive Workflows. These sessions were recorded and will be posted on the Google Cloud Next homepage later in April. You can also join our upcoming webinar, which will cover many of the concepts from these talks (more on the webinar in a moment). Google Cloud Next also featured a generative AI demo where customers could discover more than 100 generative AI solutions from partners. HashiCorp was selected for the demo and presented an AI debugger for Terraform that resolves run issues to better identify and remediate developer infrastructure deployment challenges. To learn more, check out the GitHub repo and read the Google Cloud partner gen AI demo blog.
Webinar: Scaling infrastructure as code with Terraform on Google Cloud
Now that Google Cloud Next is over, it’s time to make plans to join me and HashiCorp Developer Advocate Cole Morrison in our upcoming Scaling Infrastructure as Code on Google Cloud webinar, on Thursday, May 2, at 9 a.m. PT. We’ll cover proven strategies and approaches to creating scalable infrastructure as code with HashiCorp Terraform, showing how large organizations structure projects across teams, architect globally available systems, share sensitive information securely between environments, set up developer-friendly workflows, and more. You'll see it all in action with a live demo and codebase that deploys many services across multiple teams. View the full article
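As a rough sketch of the secrets sync workflow described above, the Vault CLI flow looks something like the commands below. The destination name, service account file, and secret name are illustrative placeholders rather than values from the article, and field names can differ between Vault releases, so check the Vault secrets sync documentation or vault path-help before relying on them.

# Register a Google Cloud destination for Vault Enterprise secrets sync (Vault 1.16+).
vault write sys/sync/destinations/gcp-sm/my-gcp-dest \
    credentials=@service-account.json

# Associate a KV v2 secret with the destination so Vault keeps the synced copy up to date.
vault write sys/sync/destinations/gcp-sm/my-gcp-dest/associations/set \
    mount="secret" \
    secret_name="my-app-api-key"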
-
Hashicorp Versus OpenTofu Gets Ugly
Security Boulevard posted a topic in Infrastructure-as-Code
Hashicorp is accusing the open source OpenTofu Project of swiping some of its BSL-licensed Terraform code. Enter the lawyers. The post Hashicorp Versus OpenTofu Gets Ugly appeared first on Security Boulevard. View the full article
-
HashiCorp spotlights Women’s History Month
Hashicorp posted a topic in Infrastructure-as-Code
As Women’s History Month draws to a close, we’re spotlighting members of the HashiCorp community to share the journeys and achievements of women working in tech. Throughout March, the Women of HashiCorp employee resource group (ERG) and the wider community have celebrated Women's History Month. This annual observance recognizes the pivotal contributions of women to historical and modern-day society with activities designed to engage, educate, and inspire. Now, as Women’s History Month comes to a close, we are spotlighting empowering stories and advice from a few of our employees: Kelly McCarthy, Solutions Engineer | Austin, Texas What advice do you have for women looking to excel in technology? Most importantly, ask questions. At first I was apprehensive about asking people questions because I did not want to come across as not knowing anything, but that can really be detrimental to your personal and professional development. Most people are willing to help, and even if they do not know the answer, they will find another person or resource who does. What women in tech have inspired or influenced your career path? My first manager, Julie Seo. Her drive and ability to lead a successful sales team were something that I observed from Day 1, and this ignited my interest in tech sales. She is even one of the co-chairs of the Women of HashiCorp ERG! This showcases that she invests her time and energy not only in making the business successful, but also in lifting up the people around her. Do you have any tips for maintaining work-life balance? Exploring new interests in the community around you can help create things to do and look forward to outside of work. Since moving to Austin to work for HashiCorp, I have joined a running club. It has helped me explore the city and have something to look forward to during the week. Jenny Evans, Director, Corporate Communications, EMEA | London What was your journey to HashiCorp? I fell into marketing and communications more than 15 years ago, as a single parent, and quickly realized that I could forge a diverse and enjoyable career that used my skills. I’ve always worked in engineering and tech businesses; I find learning about complex concepts and helping others understand them very rewarding. Plus it’s an industry that’s always going to be at the forefront of the future, which is hugely exciting. What advice do you have for women looking to excel in technology? Don’t dismiss yourself as not technical. I catch myself making self-deprecating comments, and they’re just not true. Just because you haven’t trained as an engineer or in computer science doesn’t mean you can’t have an impact in tech. You know more than you think and there are many ways to be technical without writing code. What women in tech have inspired or influenced your career path? HashiCorp Field CTO Sarah Polan and our Vice President of Northern EMEA Louise Fellows. I have the utmost admiration for their knowledge, work ethic, and willingness to support those around them. They are role models willing to share their experiences of the good, bad, and occasionally ugly side of being a woman in tech, and how to succeed. Diana Akiri, Sales Development Representative, AMER | Austin, Texas Do you have any tips for maintaining work-life balance? What worked for me was going into each work day with the intention to feel almost tired by the end of it.
Being productive and doing all the work I can within working hours actually made me feel good about rewarding myself with the evening off and alleviated a lot of stress and guilt. The mentality that I had to get x amount of tasks done in x amount of time has allowed me to not only reach professional goals and become a top performer, but also to feel proud of myself for the results I produced. What was your journey to HashiCorp? I was a junior in college when I came across HashiCorp. I was very interested in doing the Sales Development Representative internship, so once I applied, interviewed, and got accepted, I was over the moon! I’ve since established lasting connections with my fellow interns and many others on various teams. The internship was not only an introduction to HashiCorp, but also to the tech industry. I was able to learn the lingo and understand what the current landscape looks like and how HashiCorp solutions fit in it. I was able to become one of the top performers in the internship so I got a return offer, started after I graduated college, and have been working on the sales team ever since! Celine Valentine, Solutions Engineer | Houston, Texas What was your journey to HashiCorp? My journey began as a high school STEM teacher, where I discovered my passion for technology while inspiring young minds. Transitioning into software engineering and eventually solutions engineering, I've embraced the dynamic tech field, relishing the opportunity to learn daily on the job. My focus lies in leveraging tech knowledge to empower customers, aiding them in navigating complex cloud infrastructure and security challenges. What advice do you have for women looking to excel in technology? Consistently seek feedback to improve and reflect regularly to identify areas for growth. Embrace a mindset of continuous learning, striving to become an expert in one domain before branching out to others. What challenges have you faced in your career and how did you overcome them? I've learned to execute decisions thoughtfully, overcoming challenges by incorporating multiple perspectives and seeking advice. By channeling emotions into empathy and utilizing outlets like physical activities and hobbies, I've navigated difficulties while celebrating each step forward. What words of encouragement would you like to share? Always seek support from family, coworkers, and mentors during challenging times, fostering a positive circle of influence. Approach difficult situations with careful consideration and execution, utilizing emotions for empathy while maintaining sensibility. Communicate with clarity for conflict resolution, providing constructive alternative solutions in a diplomatic manner. Find outlets for self-expression and positive coping mechanisms, celebrating small victories to propel yourself forward. Netra Mali, Software Engineer, Terraform | Toronto What was your journey to HashiCorp? A passion for technology and problem-solving led me to pursue a degree in computer science. Here, I gained valuable experience and skills interning at several tech companies, including HashiCorp, having the opportunity to work on cutting-edge projects and collaborate with talented individuals. I made meaningful contributions, and my team was impressed by my work ethic, passion, and initiative to be involved in the HashiCorp community. When I received a full-time offer from HashiCorp, I was thrilled to accept. 
At HashiCorp, I've been able to work on challenging projects, contribute to impactful customer-facing initiatives, and grow both personally and professionally. What advice do you have for women looking to excel in technology? Initially I struggled to voice my ideas, opinions, and concerns in meetings and discussions. However, once I overcame my fear of saying the wrong thing, I realized the impact sharing a thought can have on the overall success of my team. It’s about taking that first leap and slowly building up the confidence to advocate for yourself as well as paving a way for future generations of women in technology. My advice for women looking to excel in the technology field is rooted in my own experiences. First and foremost, believe in yourself and your capabilities. Confidence in your skills will propel you forward, even in the face of doubt or adversity. Having a positive attitude towards solving challenges will help you in situations where you don’t have the answers right away. Seek out mentors and allies who can offer support and guidance as you navigate your career path. These connections can provide valuable insights and help you overcome obstacles along the way. Caroline Belchamber, Account Manager, London What advice do you have for women looking to excel in technology? Listen, listen, listen and ask questions, even ones that you may feel are stupid. Those are the questions that could give you the clarity you need to start building knowledge and opinions. Putting ego aside, ask others to “explain it to me like I’m five years old” or “draw it for me” (if you’re a visual learner). It’s amazing how much people who have technical knowledge appreciate imparting that knowledge to others. What challenges have you faced in your career and how have you overcome them? As a woman working in IT for the past 15 years I’ve had a range of experiences: from being told to “make the tea” to being spat at for telling someone they were wrong. (This genuinely happened at a conference when I was demoing a product on a big screen). To overcome these negative experiences, one simply has to think, “What is going on in that person’s life to warrant such behavior?”, then take a deep breath and move on. Who has inspired or influenced your career path? I instantly thought of a handful of people that I have been lucky enough to meet, work with, and remain friends with, including: Tanya Helin, CRO at AutoRABIT; Carol Swartz, Director of Partner Development at Microsoft; HashiCorp’s Heather Potter, Vice President and Associate General Counsel, and Meghan Liese, Vice President of Product Marketing. Do you have any tips for maintaining work-life balance? Set boundaries based on what is important to you and your family. Turn off access to systems (Slack, email, etc.) to reduce the temptation to log back on. View the full article
-
As more customers use multiple cloud services or microservices, they face the difficulty of consistently managing and connecting their services across various environments, including on-premises, different clouds, and existing legacy systems. HashiCorp Consul's service mesh addresses this challenge by securely and consistently connecting applications on any runtime, network, cloud platform, or on-premises setup. In the Google Cloud ecosystem, Consul can be deployed across Google Kubernetes Engine (GKE) and Anthos GKE. Now, Consul 1.16 is also supported on GKE Autopilot, Google Cloud’s fully managed Kubernetes platform for containerized workloads. Consul 1.17 is currently on track to be supported on GKE Autopilot later this year.
Benefits of GKE Autopilot
In 2021, Google Cloud introduced GKE Autopilot, a streamlined configuration for Kubernetes that follows GKE best practices, with Google managing the cluster configuration. Reducing the complexity that comes with workloads using Kubernetes, Google’s GKE Autopilot simplifies operations by managing infrastructure, control plane, and nodes, while reducing operational and maintenance costs. Consul is the latest partner product to be generally available, fleet-wide, on GKE Autopilot. By deploying Consul on GKE Autopilot, customers can connect services and applications across clouds, platforms, and services while realizing the benefits of a simplified Kubernetes experience. The key benefits of using Autopilot include more time to focus on building your application, a strong security posture out-of-the-box, and reduced pricing — paying only for what you use:
Focus on building and deploying your applications: With Autopilot, Google manages the infrastructure using best practices for GKE. Using Consul, customers can optimize operations through centralized management and automation, saving valuable time and resources for developers.
Out-of-the-box security: With years of Kubernetes experience, GKE Autopilot implements GKE-hardening guidelines and security best practices, while blocking features deemed less safe (i.e. privileged pod- and host-level access). As a part of HashiCorp’s zero trust security solution, Consul enables least-privileged access by using identity-based authorization and service-to-service encryption.
Pay-as-you-go: GKE Autopilot’s pricing model simplifies billing forecasts and attribution because it's based on resources requested by your pods. Visit the Google Cloud and HashiCorp websites to learn more about GKE Autopilot pricing and HashiCorp Consul pricing.
Deploying Consul on GKE Autopilot
Deploying Consul on GKE Autopilot facilitates service networking across a multi-cloud environment or microservices architecture, allowing customers to quickly and securely deploy and manage Kubernetes clusters. With Consul integrated across Google Cloud Kubernetes, including GKE, GKE Autopilot, and Anthos GKE, Consul helps bolster application resilience, increase uptime, accelerate application deployment, and improve security across service-to-service communications for clusters, while reducing overall operational load.
Today, you can deploy Consul service mesh on GKE Autopilot using the following configuration for Helm in your values.yaml file:

global:
  name: consul
connectInject:
  enabled: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/home/kubernetes/bin"
    cniNetDir: "/etc/cni/net.d"

In addition, if you are using a Consul API gateway for north-south traffic, you will need to configure the Helm chart so you can leverage the existing Kubernetes Gateway API resources provided by default when provisioning GKE Autopilot. We recommend the configuration shown below for most deployments on GKE Autopilot as it provides the greatest flexibility by allowing both API gateway and service mesh workflows. Refer to Install Consul on GKE Autopilot for more information.

global:
  name: consul
connectInject:
  enabled: true
  apiGateway:
    manageExternalCRDs: false
    manageNonStandardCRDs: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/home/kubernetes/bin"
    cniNetDir: "/etc/cni/net.d"

Learn more
You can learn more about the process that Google Cloud uses to support HashiCorp Consul workloads on GKE Autopilot clusters with this GKE documentation and resources page. Here’s how to get started on Consul:
Learn more in the Consul documentation.
Begin using Consul 1.16 by installing the latest Helm chart, and learn how to use a multi-port service in Consul on Kubernetes deployments.
Try Consul Enterprise by starting a free trial.
Sign up for HashiCorp-managed HCP Consul.
View the full article
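If you want to try the values shown above, a minimal install sketch with the standard HashiCorp Helm repository looks like this (the release name and namespace are arbitrary choices, not requirements from the post):

# Add the HashiCorp Helm repository and install Consul with the GKE Autopilot
# values above saved as values.yaml.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install consul hashicorp/consul \
    --namespace consul --create-namespace \
    --values values.yaml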
-
Before I automate the installation and configuration of a tool, I log into a virtual machine and run each command separately, eventually building a server with the correct configuration and packages. When I teach how to use tools like HashiCorp Vault and Consul on live streams, I encourage the use of manual commands to reinforce important concepts and build tacit knowledge when operating and using a tool. However, I also need a way to help my co-host learn the tools without making them depend too much on pre-existing knowledge of AWS and Terraform. I also need to automate their manual commands to set up for the next streaming episode. HashiCorp Boundary is a secure remote access solution that provides an easy way to allow access to applications and critical systems with fine-grained authorizations based on trusted identities. Boundary helps me grant temporary access to my co-host on a live stream without them needing to fully understand AWS or Terraform. In this post, I’ll show you how I use Boundary to grant temporary access to my co-host, record their manual commands on a live stream, and reconcile the commands into automation written in Terraform. At the end of the stream, I play back a session recording and use the configuration to automate the next episode. This workflow of making manual break-glass changes to an endpoint and reconciling the changes to automation applies to any automation you build. Grant temporary access to servers Break-glass changes involve granting temporary access to log in to a system to make emergency changes. When making a live video, I need to collaborate with my co-host, Melissa Gurney (Director of Community Development), and grant her temporary access to a set of virtual machines during the episode. I set up HashiCorp Cloud Platform (HCP) Boundary and create a self-managed Boundary worker to help proxy into EC2 instances on AWS. On the stream, Melissa uses Boundary Desktop to target a specific server without needing to download its SSH key or pass in a specific username. Prior to using Boundary, my co-host and I would share Amazon EC2 key pairs and label which ones logged into which instance. Now, Boundary automatically injects the SSH credentials from Vault. Melissa and I do not have direct access to the SSH keys, which further secures our environment and reduces the burden of downloading the keys for each EC2 instance. Some episodes require us to configure multiple servers. To help with this, I create a host set to logically group a set of Vault servers in Boundary, as they share a common function. Melissa selects which Vault server to configure based on the list of hosts. Sharing a screen on live video has its own security concerns. While we try to avoid showing root credentials in plaintext, we have to run commands that generate tokens and keys that we cannot easily mask. To mitigate the risk of exposing these credentials, I use Boundary to close Melissa’s sessions to each server at the end of each episode. Then, I use Terraform to create a new set of servers after each episode to revoke any tokens or keys. Reconcile manual commands into automation During the live stream, Melissa logs into different servers and runs several commands to configure a Vault server. Prior to using Boundary, my previous co-hosts and I had to remember to copy the history of commands off each server we configured in the episode. We would replay the entire two-hour episode to reverse engineer the history by putting the proper configuration and commands into a script. 
Now, I set up Boundary session recording to record each command Melissa runs on the server during the live stream. After the live stream, I find the session recording in Boundary and replay the commands. I directly copy the configuration into my automation for the next episode. For example, Melissa and I manually built a Vault server on one virtual machine instance. After the stream, I found the recording of the session on the Vault server. By reviewing the recording, I could copy a working Vault configuration and update it in the user data script for the EC2 instance. Even though manual commands require some editing for automation, I can quickly copy a tested sequence of commands and configuration and apply minor updates for automation. These updates include refactoring manual commands and configurations with hard coded IP addresses or EC2 instance identifiers to use templated or dynamically generated values. Learn more By granting temporary access to my co-host during the live stream and recording their manual commands with Boundary, I can track changes across multiple servers and replay them for future automation. Prior to using Boundary, I spent hours reviewing the episode and reconstructing the configuration we typed on stream. Now, it takes less than an hour to copy the configuration and refactor it for automation. As an additional benefit, I can always return to the session recording and verify the manual commands in case I need to build new automation. The workflow I use for live streams applies to reconstructing any break-glass change you make to production. By using Boundary to control SSH access to servers in production, you can offer on-demand, time-limited access during break-glass changes. Rather than reverse engineer your commands, you can use a session recording to more efficiently copy your changes into automation after you stabilize the system. To learn more, sign up for HCP Boundary and get started with Boundary session recording. Review our tutorial to enable it on your own Boundary cluster and configure SSH credential injection from Vault. To get a live demonstration of how we use Boundary, tune into our video series for Getting into Vault and check out the repository we use for setting up each episode. View the full article
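For readers who prefer the CLI to Boundary Desktop, a hedged sketch of the same workflow looks roughly like this; the cluster URL, auth method ID, and target ID are placeholders you would replace with your own HCP Boundary values:

# Point the CLI at your HCP Boundary cluster and authenticate.
export BOUNDARY_ADDR="https://<cluster-id>.boundary.hashicorp.cloud"
boundary authenticate password -auth-method-id=<auth_method_id>

# Open an SSH session to a target; with credential injection from Vault,
# the SSH key never reaches the operator's machine.
boundary connect ssh -target-id=<target_id>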
-
HashiCorp’s approach to identity-based security provides a solid foundation for companies to safely migrate and secure infrastructure, applications, and data as they move to multi-cloud environments. The new Vault partner integrations highlighted here extend the HashiCorp ecosystem into new security use cases.
Vault Enterprise
We’re pleased to announce four new Vault Enterprise integrations:
CloudBees
CloudBees provides a software delivery platform for enterprises, enabling them to increase development velocity with reduced risk. To help better secure these applications for customers, the company has launched a new Vault Jenkins plugin, allowing organizations to more easily integrate secrets management into their software CI/CD pipeline. In addition to supporting both Vault Community and Enterprise editions, this Jenkins plugin also supports HCP Vault.
Futurex
Futurex, a provider of hardened, enterprise-class data security solutions, has integrated with HashiCorp Vault's managed keys feature so customers can leverage the enhanced encryption and key management capabilities of the Futurex KMES Series 3. The managed keys feature, which delegates the handling and storage of, and interaction with, private key material to a trusted external key management service (KMS), empowers organizations to strengthen how private key material is protected when required by business or compliance requirements. These managed keys are used in Vault’s PKI secrets engine for offloading privileged PKI operations to the hardware security module (HSM).
Scality
Scality is the provider of RING, a software-defined, scale-out storage management solution. RING is now integrated with Vault for external key management. With this long-awaited integration, which uses the KMIP protocol, end users of RING can now rely on Vault Enterprise to provide encryption keys to encrypt storage objects while hardening their storage security posture.
Securosys
Securosys, a solutions provider in cybersecurity and encryption, completed a new Vault plugin that implements a platform-agnostic, REST-based HSM interface, eliminating connectivity hurdles by using secure web connections (TLS). Building on existing strengths between Securosys and Vault Enterprise, this new integration employs the new REST interface to take advantage of Vault’s managed keys feature to further harden Vault and offload cryptographic operations to the cloud-aware Primus HSM.
Vault Community
We’re pleased to announce a pair of new HashiCorp Vault Community integrations:
Coder
Coder, a self-hosted remote development platform that shifts software development from local machines to the cloud, now leverages HashiCorp Vault to securely store and retrieve sensitive secrets required in the development pipeline, ensuring the right secrets are available to the right entities at the right time.
KeepHQ
KeepHQ is an open source AIOps data aggregation platform that acts as a single pane of glass for the myriad alerts and logs in any environment, minimizing alert fatigue. Keep can now utilize a customer’s instance of Vault to access secrets for applications, providing alerts and notifications to help operators optimize the aggregation pipeline.
HCP Vault
In addition to the previously listed CloudBees HCP Vault integration, we’re pleased to announce two additional HCP Vault integrations:
Automation Anywhere
Automation Anywhere’s Automation 360, which delivers secure enterprise automation alongside process intelligence to improve operational efficiency, can now retrieve credentials from HCP Vault or Vault Enterprise. As new bots and processes are created within Automation 360, customers no longer need to manage separate credentials for each application in the process pipeline; instead, a seamless integration lets the customer’s own instance of Vault manage and rotate these secrets, providing a better security posture for the customer.
New Relic
New Relic, which builds observability solutions to help engineering teams analyze and troubleshoot their applications and services, announced their new HCP Vault integration with New Relic Instant Observability. Metrics and logs observability are crucial for ensuring the performance and security of your HCP Vault cluster and essential for understanding client-related usage. The HCP Vault integration with New Relic IO provides an out-of-the-box dashboard for HCP Vault metrics and audit logs to quickly and easily gain deep insight into the health of your HCP Vault environment.
Join the Vault Integration Program
To learn more about the Vault Integration Program and apply to become a validated partner, please visit our Become a Partner page. View the full article
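As a hedged illustration of the kind of KMIP-based integration mentioned for Scality's RING above, the Vault Enterprise KMIP secrets engine is enabled and scoped roughly like this; the scope and role names are placeholders, not details from the article:

# Enable the KMIP secrets engine and start its listener (Vault Enterprise with ADP).
vault secrets enable kmip
vault write kmip/config listen_addrs=0.0.0.0:5696

# Create a scope and role that the external storage system's KMIP client can use.
vault write -f kmip/scope/storage
vault write kmip/scope/storage/role/admin operation_all=true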
-
Many container images use Alpine Linux as their base operating system. When you build your own container image, you include the installation of packages in a Dockerfile (Containerfile). While you can use the official container images for HashiCorp tools, you may need to build your own container image with additional dependencies to apply HashiCorp Terraform in a CI/CD pipeline, run HashiCorp Vault or Consul on a workload orchestrator, or deploy HashiCorp Boundary in containers. This post demonstrates how to install the official release binaries for HashiCorp tools on Alpine Linux for container images. We’re sharing these instructions because although HashiCorp supports official repositories for many operating systems and distributions, including various Linux distributions, Alpine Linux users must download the tools from precompiled binaries on the HashiCorp release site. The binaries are not available through Alpine Package Keeper.
Build a container image
You can download the binary for any HashiCorp tool on the HashiCorp release site. Use the release site to download a specific product and its version for a given operating system and architecture. For Alpine Linux, use the product binary compiled for Linux AMD64:

FROM alpine:latest
ARG PRODUCT
ARG VERSION
RUN apk add --update --virtual .deps --no-cache gnupg && \
    cd /tmp && \
    wget https://releases.hashicorp.com/${PRODUCT}/${VERSION}/${PRODUCT}_${VERSION}_linux_amd64.zip && \
    wget https://releases.hashicorp.com/${PRODUCT}/${VERSION}/${PRODUCT}_${VERSION}_SHA256SUMS && \
    wget https://releases.hashicorp.com/${PRODUCT}/${VERSION}/${PRODUCT}_${VERSION}_SHA256SUMS.sig && \
    wget -qO- https://www.hashicorp.com/.well-known/pgp-key.txt | gpg --import && \
    gpg --verify ${PRODUCT}_${VERSION}_SHA256SUMS.sig ${PRODUCT}_${VERSION}_SHA256SUMS && \
    grep ${PRODUCT}_${VERSION}_linux_amd64.zip ${PRODUCT}_${VERSION}_SHA256SUMS | sha256sum -c && \
    unzip /tmp/${PRODUCT}_${VERSION}_linux_amd64.zip -d /tmp && \
    mv /tmp/${PRODUCT} /usr/local/bin/${PRODUCT} && \
    rm -f /tmp/${PRODUCT}_${VERSION}_linux_amd64.zip ${PRODUCT}_${VERSION}_SHA256SUMS ${PRODUCT}_${VERSION}_SHA256SUMS.sig && \
    apk del .deps

The example Dockerfile includes build arguments for the product and version. Use these arguments to install the HashiCorp tool of your choice. For example, you can use this Dockerfile to create an Alpine Linux base image with Terraform version 1.7.2:

docker build --build-arg PRODUCT=terraform \
    --build-arg VERSION=1.7.2 \
    -t joatmon08/terraform:test .

You can run a container with the new Terraform base image and issue Terraform commands:

$ docker run -it joatmon08/terraform:test terraform -help
Usage: terraform [global options] <subcommand> [args]

The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

## omitted for clarity

The example Dockerfile includes commands to download the release’s checksum and signature. Use the signature to verify the checksum and the checksum to validate the archive file. This workflow requires the gnupg package to verify HashiCorp’s signature on the checksum. The Dockerfile installs gnupg and deletes it after installing the release.
While the example Dockerfile verifies and installs a product’s official release binary, it does not include dependencies to run the binary. For example, HashiCorp Nomad requires additional packages such as gcompat. Be sure to install any additional dependencies that your tools require in your container image before running a container for it.
Learn more
If you need to use a HashiCorp tool in your own container, download and unarchive the appropriate release binaries from our release site. Include verification of the signature and a checksum for the download to ensure its integrity. This installation and verification workflow applies to any Linux distribution that does not include HashiCorp software in its package repository. Review our official release channels to download and install HashiCorp products on other platforms and architectures. We release official container images for each product in DockerHub under the HashiCorp namespace. View the full article
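As a quick sketch of reusing the Dockerfile above for another product (the image tag and version are illustrative, and Nomad would additionally need an apk add --no-cache gcompat layer, as noted above):

# Build an Alpine-based Vault image from the same Dockerfile and verify the binary.
docker build --build-arg PRODUCT=vault \
    --build-arg VERSION=1.16.0 \
    -t example/vault:alpine .
docker run --rm example/vault:alpine vault version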
-
Thanks to Andre Newman, Senior Reliability Specialist at Gremlin, for his assistance creating this blog post. Chaos engineering is a modern, innovative approach to verifying your application's resilience. This post shows how to apply chaos engineering concepts to HashiCorp Vault using Gremlin and Vault stress testing tools to simulate disruptive events. You’ll learn how to collect performance benchmarking results and monitor key metrics. And you’ll see how Vault operators can use the results of the tests to iteratively improve resilience and performance in Vault architectures. Running these tests will help you identify reliability risks with Vault before they can bring down your critical apps.
What is HashiCorp Vault?
HashiCorp Vault is an identity-based secrets and encryption management system. A secret is anything that you want to tightly control access to, such as API encryption keys, passwords, and certificates. Vault has a deep and broad ecosystem with more than 100 partners and integrations, and it is used by 70% of the top 20 US banks.
Chaos engineering and Vault
Because Vault stores and handles secrets for mission-critical applications, it is a primary target for threat actors. Vault is also a foundational system that keeps your applications running. Once you’ve migrated the application secrets into Vault, if all Vault instances go down, the applications receiving secrets from Vault won’t be able to run. Any compromise or unavailability of Vault could result in significant damage to an organization’s operations, reputation, and finances. Organizations need to plan for and mitigate several possible types of Vault failures, including:
Code and configuration changes that affect application performance
Loss of the leader node
Loss of quorum in the Vault cluster
An unavailable primary cluster
High load on Vault clusters
To mitigate these risks, teams need a more modern approach to testing and validating Vault’s resilience. This is where chaos engineering comes in. Chaos engineering aims to help improve systems by identifying hidden problems and reliability risks. This is done by injecting faults — such as high CPU usage or network latency — into systems, observing how the system responds, and then using that information to improve the system. This post illustrates this by creating and running chaos experiments using Gremlin, a chaos engineering platform. Chaos engineering brings multiple benefits, including:
Improving system performance and resilience
Exposing blind spots using monitoring, observability, and alerts
Proactively validating the resilience of the system in the event of failure
Learning how systems handle different failures
Preparing and educating the engineering team for actual failures
Improving architecture design to handle failures
HashiCorp Vault architecture
Vault supports a multi-server mode for high availability. This mode protects against outages by running multiple Vault servers. High availability (HA) mode is automatically enabled when using a data store that supports it. When running in HA mode, Vault servers have two states: standby and active. For multiple Vault servers sharing a storage backend, only a single instance is active at any time. All standby instances are placed in hot standby. Only the active server processes all requests; the standby servers redirect all requests to the active Vault server. Meanwhile, if the active server is sealed, fails, or loses network connectivity, then one of the standby Vault servers becomes the active instance.
The Vault service can continue to operate, provided that a quorum of available servers remains online. Read more about performance standby nodes in our documentation. What is chaos engineering? Chaos engineering is the practice of finding reliability risks in systems by deliberately injecting faults into those systems. It helps engineers and operators proactively find shortcomings in their systems, services, and architecture before an outage hits. With the knowledge gained from chaos testing, teams can address shortcomings, verify resilience, and create a better customer experience. For most teams, chaos engineering leads to increased availability, lower mean time to resolution (MTTR), lower mean time to detection (MTTD), fewer bugs shipped to production, and fewer outages. Teams that frequently run chaos engineering experiments are also more likely to surpass 99.9% availability. Despite the name, the goal of injecting faults isn't to create chaos but to reduce chaos by surfacing, identifying, and fixing problems. Chaos engineering is also not random or uncontrolled testing. It’s a methodical approach that involves planning and forethought. That means when injecting faults, you need to plan out experiments beforehand and ensure there is a way to halt experiments, whether manually or by using health checks to check the state of systems during an experiment. Chaos engineering is not an alternative to unit tests, integration tests, or performance benchmarking. It complements them, and can even run in parallel. For example, running chaos engineering tests and performance tests simultaneously can help find problems that occur only under load. This increases the likelihood of finding reliability issues that might surface in production or during high-traffic events. The 5 stages of chaos engineering A chaos engineering experiment follows five main steps: create a hypothesis, define and measure your system’s steady state, create and run a chaos experiment, observe your system’s response to the experiment, and use your observations to improve the system. 1. Create a hypothesis A hypothesis is an educated guess about how your system will behave under certain conditions. How do you expect your system to respond to a type of failure? For example, if Vault loses the leader node in a three-node cluster, Vault should continue responding to requests, and another node should be elected as the leader. When forming a hypothesis, start small: focus on one specific part of your system. This makes it easier to test that specific system without impacting other systems. 2. Measure your steady state A system’s steady state is its performance and behavior under normal conditions. Determine the metrics that best indicate your system’s reliability and monitor those under conditions that your team considers normal. This is the baseline that you’ll compare your experiment's results against. Examples of steady-state metrics include vault.core.handle_login_request and vault.core.handle_request. See our Well-Architected Framework guidance for more key metrics. 3. Create and run a chaos experiment This is where you define the parameters of your experiment. How will you test your hypothesis? For example, when testing a Vault application’s response time, you could use a latency experiment to create a slow connection. This is also where you define abort conditions, which are conditions that indicate you should stop the experiment.
For example, if the Vault application latency rises above the experimental threshold values, you should immediately stop the experiment so you can address those unexpected results. Note that an abort doesn’t mean the experiment failed; it just means you discovered a different reliability risk than the one you were testing for. Once you have your experiment and abort conditions defined, you can build and run the experiment using Gremlin. 4. Observe the impact While the experiment is running, monitor your application’s key metrics. See how they compare to your steady state, and interpret what they mean for the test. For example, if running a blackhole experiment on your Vault cluster causes CPU usage to increase rapidly, you might have overly aggressive response-time expectations for API requests. Or, the web app might start delivering HTTP 500 errors to users instead of user-friendly error messages. In both cases, there’s an undesirable outcome that you need to address. 5. Iterate and improve Once you’ve reviewed the outcomes and compared the metrics, fix the problem. Make any necessary changes to your application or system, deploy the changes, and then validate that your changes fix the problem by repeating this process. This is how you iteratively make your system more resilient; a better approach than trying to make sweeping, application-wide fixes all at once. Implementation The next section runs through four experiments to test a Vault cluster. Before you can run these experiments, you’ll need the following prerequisites: a Vault HA cluster, a Gremlin account (sign up free for 30 days), the Vault benchmarking tool, organizational awareness (let others know you’re running experiments on this cluster), and basic monitoring. Experiment 1: Impact of losing a leader node In the first experiment, you’ll test whether Vault can continue responding to requests if a leader node becomes unavailable. If the active server is sealed, fails, or loses network connectivity, one of the standby Vault servers becomes the active instance. You’ll use a blackhole experiment to drop network traffic to and from the leader node and then monitor the cluster. Hypothesis: If Vault loses the leader node in a three-node cluster, Vault should continue responding to requests, and another node should be elected as the leader. Get a steady state from the monitoring tool Our steady state is based on three metrics: the sum of all requests handled by Vault, vault.core.handle_login_request, and vault.core.handle_request. The graphs below show the sum of requests oscillating around 20K, while handle_login_request and handle_request hover between 1 and 3. Run the experiment: This runs a blackhole experiment for 300 seconds (5 minutes) on the leader node. Blackhole experiments block network traffic from a host and are great for simulating any number of network failures, including misconfigured firewalls, network hardware failures, etc. Setting it for 5 minutes gives us enough time to measure the impact and observe any response from Vault. Here, you can see the ongoing status of the experiment in Gremlin: Observe This experiment uses Datadog for metrics. The graphs below show that Vault is responding to requests with a negligible impact on throughput. This means Vault’s standby node kicked in and was elected as the new leader.
You can confirm this by checking the nodes in your cluster using the vault operator raft command: Improve cluster design for resilience Based on these results, no immediate changes are needed, but there’s an opportunity to scale up this test. What happens if two nodes fail? Or all three? If this is a genuine concern for your team, try repeating this experiment and selecting additional nodes. You might try scaling up your cluster to four nodes instead of three — how does this change your results? Keep in mind that Gremlin provides a Halt button for stopping an ongoing experiment if something unexpected happens. Remember your abort conditions, and don’t be afraid to stop an experiment if those conditions are met. Experiment 2: Impact of losing quorum The next experiment tests whether Vault can continue responding to requests if there is no quorum, using a blackhole experiment to bring two nodes offline. In such a scenario, Vault is unable to add or remove a node or commit additional log entries, resulting in unavailability. This HashiCorp runbook documents the steps needed to bring the cluster back online, which this experiment tests. Hypothesis If Vault loses quorum, Vault should stop responding to requests. Following our runbook should bring the cluster back online in a reasonable amount of time. Get a steady state from Vault The steady state for this experiment is simple: Does Vault respond to requests? We’ll test this by retrieving a key: Run the experiment Run another blackhole experiment in Gremlin, this time targeting two nodes in the cluster. Observe Now that the nodes are down, the Vault cluster has lost quorum. Without a quorum, read and write operations cannot be performed within the cluster. Retrieving the same key returns an error this time: Recovery drill and improvements Follow the HashiCorp runbook to recover from the loss of two of the three Vault nodes by converting the remaining node into a single-node cluster. It takes a few minutes to bring the cluster online, but it works as a temporary measure. A long-term fix might be to adopt a multi-datacenter deployment where you can replicate data across multiple datacenters for performance as well as disaster recovery (DR). HashiCorp recommends using DR clusters to avoid outages and meet service level agreements (SLAs). Experiment 3: Testing how Vault handles latency This next experiment tests Vault’s ability to handle high-latency, low-throughput network connections. You test this by adding latency to your leader node, then observing request metrics to see how Vault’s functionality is impacted. Hypothesis Introducing latency on your cluster’s leader node shouldn’t cause any application timeouts or cluster failures. Get KPIs from the monitoring tool This experiment uses the same Datadog metrics as the first experiment: vault.core.handle_login_request and vault.core.handle_request. Run the experiment This time, use Gremlin to add latency. Instead of running a single experiment, create a Scenario, which lets you run multiple experiments sequentially. Gradually increase latency from 100ms to 200ms over 4 minutes, with 5-second breaks in between experiments. (This Gremlin blog post explains how a latency attack works.) Observe In our test, the experiment introduced some delays in response time, especially in the 95th and 99th percentiles, but all requests were successful.
More importantly, the key metrics below show that our cluster remained stable: Improve cluster design for resilience To make the cluster even more resilient, add non-voting nodes to the cluster. A non-voting node has all of Vault's data replicated but does not contribute to the quorum count. This can be used with performance standby nodes to add read scalability to a cluster in cases where servers need to handle a high volume of reads. This way, if one or two nodes have poor performance, or if a large volume of reads saturates a node, these standby nodes can kick in and maintain performance. Experiment 4: Testing how Vault handles memory pressure This final experiment tests Vault’s ability to handle reads during high memory pressure. Hypothesis If you consume memory on a Vault cluster’s leader node, applications should switch to reading from performance standby nodes. This should have no impact on performance. Get metrics from the monitoring tool For this experiment, the graphs below gather telemetry metrics directly from Vault nodes; specifically, memory allocated to and used by Vault. Run the experiment Run a memory experiment to consume 99% of Vault’s memory for 5 minutes. This pushes memory usage on the leader node to its limit and holds it there until the experiment ends (or you abort). Observe In this example, the leader node kept running, and while there were minor delays in response time, all requests were successful as seen in the graph below. This means our cluster can tolerate high memory usage well. Improve cluster design for resilience As in the previous experiment, you can use non-voting nodes and performance standby nodes to add compute capacity to your cluster if needed. These nodes add extra memory but don’t contribute to the quorum count. If your cluster runs low on memory, you can add these nodes until usage drops again. Other experiments that might be beneficial include simulated DDoS attacks, cluster failover, and more. How to build chaos engineering culture Teams typically think of reliability in terms of technology and systems. In reality, reliability starts with people. Getting application developers, site reliability engineers (SREs), incident responders, and other team members to think proactively about reliability is how you start building a culture of reliability. In a culture of reliability, each member of the organization works toward maximizing the availability of their services, processes, and people. Team members focus on improving the availability of their services, reducing the risk of outages, and responding to incidents as quickly as possible to reduce downtime. Reliability culture ultimately focuses on a single goal: providing the best possible customer experience.
In practice, building a reliability culture requires several steps, including: introducing the concept of chaos engineering to other teams; showing the value of chaos engineering to your team (you can use the results of these experiments as proof); encouraging teams to focus on reliability early in the software development lifecycle, not just at the end; building a team culture that encourages experimentation and learning, rather than assigning blame for incidents; adopting the right tools and practices to support chaos engineering; and using chaos engineering to regularly test systems and processes, automate experiments, and run organized team reliability events (often called “Game Days”). To learn more about adopting chaos engineering practices, read Gremlin’s guide: How to train your engineers in chaos engineering or this S&P Global case study. Learn more One of the biggest challenges in adopting a culture of reliability is maintaining the practice. Reliability can’t be achieved with a single action: it has to be maintained and validated regularly, and reliability tools need to both enable and support this practice. Chaos engineering is a key component of that. Run experiments on HashiCorp Vault clusters, automate reliability testing, and keep operators aware of the reliability risks in their systems. Want to see how Indeed.com manages Vault reliability testing? Watch our video All the 9s: Keeping Vault resilient and reliable from HashiConf 2023. If you use HashiCorp Consul, check out our tutorial and interactive lab on Consul and chaos engineering. View the full article
-
We are pleased to announce the release of HashiCorp Boundary 0.15, which adds session recording storage policies (HCP Plus/Enterprise) and desktop/CLI client improvements like search and filtering. Boundary is a modern privileged access management (PAM) solution that was designed for and thrives in dynamic environments. Boundary streamlines end user access to infrastructure resources without exposing credentials or internal network topologies. Recent initiatives have aimed to improve governance and usability. As a result, previous releases included features like SSH session recording and an embedded terminal in the desktop client. We continue this effort in our latest 0.15 release and are excited for users to try it out themselves. Session recording storage policies (HCP Plus/Enterprise) Introduced in Boundary 0.13, SSH session recording helped organizations meet their compliance objectives by recording detailed end user activities and commands. Those SSH session recordings are then stored in the organization’s designated Amazon S3 buckets. Boundary 0.15 improves storage governance by allowing administrators to set retention and deletion policies for session recordings. This helps ensure that recordings are available and accessible for the desired retention period, ensuring that teams can meet various regulatory requirements. This feature also helps reduce management and storage costs by automatically deleting recordings at the designated time and date. Improvements to the Boundary Desktop/CLI client Boundary 0.15 improvements include search and filtering capabilities, session time indicators, and support for ARM64 architectures. Search and filtering Recent improvements to the Boundary Desktop client have dramatically simplified the end user experience. However, at a large scale, some end users may be authorized to connect to tens or hundreds of target resources. This makes it difficult to locate a specific target in a long list. Similarly, finding a specific session among tens or hundreds of active sessions can also be challenging. The desktop and CLI clients in Boundary 0.15 include new search and filter capabilities to help users locate their desired targets and sessions. Users simply search for the full or partial names or IDs of the desired target and can further narrow down the results by filtering by scope or session state (active, pending, or terminated). Larger result sets are paginated for improved search performance. We expect this subtle addition to noticeably improve the user experience and reduce the time it takes to locate and connect to a target. Session time indicator Our goal with Boundary Desktop is to centralize the experience of connecting to any resource on your network, for any type of user. Upon establishing a session, end users often can’t tell how long their sessions will last. That information has now been added in version 1.8 of the Boundary Desktop client. A time-remaining helper now appears at the top of the session, giving users a sense of how long their session will be valid for. This also paves the way for future features, such as approvals and session renewals. Support for ARM64 architectures Prior to this release, Boundary did not support Darwin ARM64/Apple silicon builds. Version 1.8 of the Boundary Desktop client adds support for ARM64 architectures. Download the Boundary Desktop client here. Minor improvements and bug fixes We have also made various minor improvements and addressed bugs uncovered since the previous release.
Improvements include grant scopes for roles and new CLI commands that simplify workflows and reduce the number of required sub-commands. For more information, please view the changelog. Get started with Boundary 0.15 We are excited for users to try the new governance and usability features available in Boundary 0.15. Administrators can deploy a HashiCorp-managed Boundary cluster using the HashiCorp Cloud Platform (HCP) or a self-managed Boundary cluster using Boundary’s Community or Enterprise versions. Check out these resources to get started: Sign up for a free HCP Boundary account. For self-managed versions, download Boundary 0.15. Download the free Boundary Desktop client. Watch this Getting Started with HCP Boundary demo video. Get up and running quickly with our Get Started with HCP Boundary tutorial. Read the documentation for storage policies and Boundary CLI search functions. To request a Boundary Enterprise trial, contact HashiCorp sales. View the full article
-
- hashicorp
- storage policies
-
(and 1 more)
Tagged with:
-
HashiCorp and Microsoft have partnered to create Terraform modules that follow Microsoft's Azure Well-Architected Framework and best practices. In a previous blog post, we demonstrated how to accelerate AI adoption on Azure with Terraform. This post covers how to use a simple three-step process to build, secure, and enable OpenAI applications on Azure with HashiCorp Terraform and Vault. The code for this demo can be found on GitHub. You can leverage the Microsoft application outlined in this post and the Microsoft Azure Kubernetes Service (AKS) to integrate with OpenAI. You can also read more about how to deploy an application that uses OpenAI on AKS on the Microsoft website. Key considerations of AI The rise in AI workloads is driving an expansion of cloud operations. Gartner predicts that cloud infrastructure will grow 26.6% in 2024, as organizations deploying generative AI (GenAI) services look to the public cloud. To create a successful AI environment, orchestrating the seamless integration of artificial intelligence and operations demands a focus on security, efficiency, and cost control. Security Data integration, the bedrock of AI, not only requires the harmonious assimilation of diverse data sources but must also include a process to safeguard sensitive information. In this complex landscape, the deployment of public key infrastructure (PKI) and robust secrets management becomes indispensable, adding cryptographic resilience to data transactions and ensuring the secure handling of sensitive information. For more information on the HashiCorp Vault solution, see our use-case page on automated PKI infrastructure. Machine learning models, pivotal in anomaly detection, predictive analytics, and root-cause analysis, not only provide operational efficiency but also serve as sentinels against potential security threats. Automation and orchestration, facilitated by tools like HashiCorp Terraform, extend beyond efficiency to become critical components in fortifying against security vulnerabilities. Scalability and performance, guided by resilient architectures and vigilant monitoring, ensure adaptability to evolving workloads without compromising on security protocols. Efficiency and cost control In response, platform teams are increasingly adopting infrastructure as code (IaC) to enhance efficiency and help control cloud costs. HashiCorp products underpin some of today’s largest AI workloads, using infrastructure as code to help eliminate idle resources and overprovisioning, and reduce infrastructure risk. Automation with Terraform This post delves into specific Terraform configurations tailored for application deployment within a containerized environment. The first step looks at using IaC principles to deploy infrastructure to efficiently scale AI workloads, reduce manual intervention, and foster a more agile and collaborative AI development lifecycle on the Azure platform. The second step focuses on how to build security and compliance into an AI workflow. The final step shows how to manage application deployment on the newly created resources. Prerequisites For this demo, you can use either Azure OpenAI service or OpenAI service: To use Azure OpenAI service, enable it on your Azure subscription using the Request Access to Azure OpenAI Service form. To use OpenAI, sign up on the OpenAI website.
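The Terraform configuration in the linked GitHub repo also defines the AKS cluster that later steps reference as azurerm_kubernetes_cluster.tf-ai-demo. As a rough orientation only, a minimal AKS definition with the azurerm provider looks something like the sketch below; the resource group name, region, node count, and VM size are illustrative assumptions, not the demo's actual values:

resource "azurerm_resource_group" "tf-ai-demo" {
  name     = "tf-ai-demo-rg"   # assumed name
  location = "eastus"          # assumed region
}

resource "azurerm_kubernetes_cluster" "tf-ai-demo" {
  name                = "tf-ai-demo-aks"
  location            = azurerm_resource_group.tf-ai-demo.location
  resource_group_name = azurerm_resource_group.tf-ai-demo.name
  dns_prefix          = "tfaidemo"

  default_node_pool {
    name       = "default"
    node_count = 2                 # assumed size
    vm_size    = "Standard_D4s_v3" # assumed SKU
  }

  identity {
    type = "SystemAssigned"
  }
}

The kube_config attributes exported by this resource are what feed the Helm provider shown in step one.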
Step one: Build First, let's look at the Helm provider block in main.tf:

provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.host
    username               = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.username
    password               = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.password
    client_certificate     = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.cluster_ca_certificate)
  }
}

This code uses information from the AKS resource to populate the details in the Helm provider, letting you deploy resources into AKS pods using native Helm charts. With this Helm chart method, you deploy multiple resources using Terraform in the helm_release.tf file. This file sets up HashiCorp Vault, cert-manager, and Traefik Labs’ ingress controller within the pods. The Vault configuration shows the Helm set functionality to customize the deployment:

resource "helm_release" "vault" {
  name  = "vault"
  chart = "hashicorp/vault"

  set {
    name  = "server.dev.enabled"
    value = "true"
  }
  set {
    name  = "server.dev.devRootToken"
    value = "AzureA!dem0"
  }
  set {
    name  = "ui.enabled"
    value = "true"
  }
  set {
    name  = "ui.serviceType"
    value = "LoadBalancer"
  }
  set {
    name  = "ui.serviceNodePort"
    value = "null"
  }
  set {
    name  = "ui.externalPort"
    value = "8200"
  }
}

In this demo, the Vault server is customized to be in Dev Mode, have a defined root token, and enable external access to the pod via a load balancer using a specific port. At this stage you should have created a resource group with an AKS cluster and a service bus established. The containerized environment should look like this: If you want to log in to the Vault server at this stage, use the EXTERNAL-IP load balancer address with port 8200 (like this: http://[EXTERNAL_IP]:8200/) and log in using AzureA!dem0. Step two: Secure Now that you have established a base infrastructure in the cloud and the microservices environment, you are ready to configure Vault resources to integrate PKI into your environment. This centers around the pki_build.tf.second file, which you need to rename to remove the .second extension and make it executable as a Terraform file. After renaming the file, perform a terraform apply; because you are adding to the current infrastructure, this adds the elements that set up Vault with a root certificate and issue it within the pod. To do this, use the Vault provider and configure it to define a mount point for the PKI, a root certificate, role cert URL, issuer, and policy needed to build the PKI:

resource "vault_mount" "pki" {
  path        = "pki"
  type        = "pki"
  description = "This is a PKI mount for the Azure AI demo."
  default_lease_ttl_seconds = 86400
  max_lease_ttl_seconds     = 315360000
}

resource "vault_pki_secret_backend_root_cert" "root_2023" {
  backend     = vault_mount.pki.path
  type        = "internal"
  common_name = "example.com"
  ttl         = 315360000
  issuer_name = "root-2023"
}

Using the same Vault provider, you can also configure Kubernetes authentication to create a role named "issuer" that binds the PKI policy with a Kubernetes service account named issuer:

resource "vault_auth_backend" "kubernetes" {
  type = "kubernetes"
}

resource "vault_kubernetes_auth_backend_config" "k8_auth_config" {
  backend         = vault_auth_backend.kubernetes.path
  kubernetes_host = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.host
}

resource "vault_kubernetes_auth_backend_role" "k8_role" {
  backend                          = vault_auth_backend.kubernetes.path
  role_name                        = "issuer"
  bound_service_account_names      = ["issuer"]
  bound_service_account_namespaces = ["default", "cert-manager"]
  token_policies                   = ["default", "pki"]
  token_ttl                        = 60
  token_max_ttl                    = 120
}

The role connects the Kubernetes service account, issuer, which is created in the default namespace, with the PKI Vault policy. The tokens returned after authentication are valid for 60 minutes. The Kubernetes service account name, issuer, is created using the Kubernetes provider, discussed in step three, below. These resources configure the deployment to use HashiCorp Vault to manage the PKI certification process. The image below shows how HashiCorp Vault interacts with cert-manager to issue certificates to be used by the application: Step three: Enable The final stage requires another terraform apply, as you are again adding to the environment. You now use app_build.tf.third to build an application. To do this you need to rename app_build.tf.third to remove the .third extension and make it executable as a Terraform file. Interestingly, the code in app_build.tf uses the Kubernetes provider resource kubernetes_manifest. The manifest values are the HCL (HashiCorp Configuration Language) representation of a Kubernetes YAML manifest. (We converted an existing manifest from YAML to HCL to get the code needed for this deployment. You can do this using Terraform’s built-in yamldecode() function or the HashiCorp tfk8s tool.) The code below represents an example of a service manifest used to create a service on port 80 to allow access to the store-admin app that was converted using the tfk8s tool:

resource "kubernetes_manifest" "service_tls_admin" {
  manifest = {
    "apiVersion" = "v1"
    "kind"       = "Service"
    "metadata" = {
      "name"      = "tls-admin"
      "namespace" = "default"
    }
    "spec" = {
      "clusterIP" = "10.0.160.208"
      "clusterIPs" = [
        "10.0.160.208",
      ]
      "internalTrafficPolicy" = "Cluster"
      "ipFamilies" = [
        "IPv4",
      ]
      "ipFamilyPolicy" = "SingleStack"
      "ports" = [
        {
          "name"       = "tls-admin"
          "port"       = 80
          "protocol"   = "TCP"
          "targetPort" = 8081
        },
      ]
      "selector" = {
        "app" = "store-admin"
      }
      "sessionAffinity" = "None"
      "type"            = "ClusterIP"
    }
  }
}

Putting it all together Once you’ve deployed all the elements and applications, you use the certificate stored in a Kubernetes secret to apply the TLS configuration to inbound HTTPS traffic.
In the example below, you associate "example-com-tls" — which includes the certificate created by Vault earlier — with the inbound IngressRoute deployment using the Terraform manifest:

resource "kubernetes_manifest" "ingressroute_admin_ing" {
  manifest = {
    "apiVersion" = "traefik.containo.us/v1alpha1"
    "kind"       = "IngressRoute"
    "metadata" = {
      "name"      = "admin-ing"
      "namespace" = "default"
    }
    "spec" = {
      "entryPoints" = [
        "websecure",
      ]
      "routes" = [
        {
          "kind"  = "Rule"
          "match" = "Host(`admin.example.com`)"
          "services" = [
            {
              "name" = "tls-admin"
              "port" = 80
            },
          ]
        },
      ]
      "tls" = {
        "secretName" = "example-com-tls"
      }
    }
  }
}

To test access to the OpenAI store-admin site, you need a domain name. You use a FQDN to access the site that you are going to protect using the generated certificate and HTTPS. To set this up, access your AKS cluster. The Kubernetes command-line client, kubectl, is already installed in your Azure Cloud Shell. You enter:

kubectl get svc

And you should get the following output:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello LoadBalancer 10.0.23.77 20.53.189.251 443:31506/TCP 94s
kubernetes ClusterIP 10.0.0.1 443/TCP 29h
makeline-service ClusterIP 10.0.40.79 3001/TCP 4h45m
mongodb ClusterIP 10.0.52.32 27017/TCP 4h45m
order-service ClusterIP 10.0.130.203 3000/TCP 4h45m
product-service ClusterIP 10.0.59.127 3002/TCP 4h45m
rabbitmq ClusterIP 10.0.122.75 5672/TCP,15672/TCP 4h45m
store-admin LoadBalancer 10.0.131.76 20.28.162.45 80:30683/TCP 4h45m
store-front LoadBalancer 10.0.214.72 20.28.162.47 80:32462/TCP 4h45m
traefik LoadBalancer 10.0.176.139 20.92.218.96 80:32240/TCP,443:32703/TCP 29h
vault ClusterIP 10.0.69.111 8200/TCP,8201/TCP 29h
vault-agent-injector-svc ClusterIP 10.0.31.52 443/TCP 29h
vault-internal ClusterIP None 8200/TCP,8201/TCP 29h
vault-ui LoadBalancer 10.0.110.159 20.92.217.182 8200:32186/TCP 29h

Look for the traefik entry and note the EXTERNAL-IP (yours will be different from the one shown above). Then, on your local machine, create a localhost entry for admin.example.com to resolve to the address. For example, on macOS, you can use sudo nano /etc/hosts. If you need more help, search “create localhost” for your machine type. Now you can enter https://admin.example.com in your browser and examine the certificate. This certificate is built from a root certificate authority (CA) held in Vault (example.com) and is valid against this issuer (admin.example.com) to allow for secure access over HTTPS. To verify the right certificate is being issued, expand the detail in your browser and view the cert name and serial number: You can then check this in Vault and see if the common name and serial numbers match. Terraform has configured all of the elements using the three-step approach shown in this post. To test the OpenAI application, follow Microsoft’s instructions. Skip to Step 4 and use https://admin.example.com to access the store-admin and the original store-front load balancer address to access the store-front. DevOps for AI app development To learn more and keep up with the latest trends in DevOps for AI app development, check out this Microsoft Reactor session with HashiCorp Co-Founder and CTO Armon Dadgar: Using DevOps and copilot to simplify and accelerate development of AI apps. It covers how developers can use GitHub Copilot with Terraform to create code modules for faster app development. You can get started by signing up for a free Terraform Cloud account. View the full article
-
How do you know if you can run terraform apply to your infrastructure without negatively affecting critical business applications? You can run terraform validate and terraform plan to check your configuration, but will that be enough? Whether you’ve updated some HashiCorp Terraform configuration or a new version of a module, you want to catch errors quickly before you apply any changes to production infrastructure. In this post, I’ll discuss some testing strategies for HashiCorp Terraform configuration and modules so that you can terraform apply with greater confidence. As a HashiCorp Developer Advocate, I’ve compiled some advice to help Terraform users learn how infrastructure tests fit into their organization’s development practices, the differences in testing modules versus configuration, and approaches to manage the cost of testing. I included a few testing examples with Terraform’s native testing framework. No matter which tool you use, you can generalize the approaches outlined in this post to your overall infrastructure testing strategy. In addition to the testing tools and approaches in this post, you can find other perspectives and examples in the references at the end. The testing pyramid In theory, you might decide to align your infrastructure testing strategy with the test pyramid, which groups tests by type, scope, and granularity. The testing pyramid suggests that engineers write fewer tests in the categories at the top of the pyramid, and more tests in the categories at the bottom. Higher-level tests in the pyramid take more time to run and cost more due to the higher number of resources you have to configure and create. In reality, your tests may not perfectly align with the pyramid shape. The pyramid offers a common framework to describe what scope a test can cover to verify configuration and infrastructure resources. I’ll start at the bottom of the pyramid with unit tests and work my way up the pyramid to end-to-end tests. Manual testing involves spot-checking infrastructure for functionality and can have a high cost in time and effort. Linting and formatting While not on the test pyramid, you often encounter tests to verify the hygiene of your Terraform configuration. Use terraform fmt -check and terraform validate to format and validate the correctness of your Terraform configuration. When you collaborate on Terraform, you may consider testing the Terraform configuration for a set of standards and best practices. Build or use a linting tool to analyze your Terraform configuration for specific best practices and patterns. For example, a linter can verify that your teammate defines a Terraform variable for an instance type instead of hard-coding the value. Unit tests At the bottom of the pyramid, unit tests verify individual resources and configurations for expected values. They should answer the question, “Does my configuration or plan contain the correct metadata?” Traditionally, unit tests should run independently, without external resources or API calls. For additional test coverage, you can use any programming language or testing tool to parse the Terraform configuration in HashiCorp Configuration Language (HCL) or JSON and check for statically defined parameters, such as provider attributes with defaults or hard-coded values. However, none of these tests verify correct variable interpolation, list iteration, or other configuration logic. As a result, I usually write additional unit tests to parse the plan representation instead of the Terraform configuration. 
Configuration parsing does not require active infrastructure resources or authentication to an infrastructure provider. However, unit tests against a Terraform plan require Terraform to authenticate to your infrastructure provider and make comparisons. These types of tests overlap with security testing done via policy as code because you check attributes in Terraform configuration for the correct values. For example, your Terraform module parses the IP address from an AWS instance’s DNS name and outputs a list of IP addresses to a local file. At a glance, you don’t know if it correctly replaces the hyphens and retrieves the IP address information.

variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}

variable "service_kind" {
  type        = string
  description = "Service kind to search"
}

locals {
  ip_addresses = toset([
    for service, service_data in var.services :
    replace(replace(split(".", service_data.node)[0], "ip-", ""), "-", ".")
    if service_data.kind == var.service_kind
  ])
}

resource "local_file" "ip_addresses" {
  content  = jsonencode(local.ip_addresses)
  filename = "./${var.service_kind}.hcl"
}

You could pass an example set of services and run terraform plan to manually check that your module retrieves only the TCP services and outputs their IP addresses. However, as you or your team adds to this module, you may break the module’s ability to retrieve the correct services and IP addresses. Writing unit tests ensures that the logic of searching for services based on kind and retrieving their IP addresses remains functional throughout a module’s lifecycle. This example uses two sets of unit tests written in terraform test to check the logic generating the service’s IP addresses for each service kind. The first set of tests verifies the file contents will have two IP addresses for TCP services, while the second set of tests checks that the file contents will have one IP address for the HTTP service:

variables {
  services = {
    "service_0" = {
      kind = "tcp"
      node = "ip-10-0-0-0"
    },
    "service_1" = {
      kind = "http"
      node = "ip-10-0-0-1"
    },
    "service_2" = {
      kind = "tcp"
      node = "ip-10-0-0-2"
    },
  }
}

run "get_tcp_services" {
  variables {
    service_kind = "tcp"
  }

  command = plan

  assert {
    condition     = jsondecode(local_file.ip_addresses.content) == ["10.0.0.0", "10.0.0.2"]
    error_message = "Parsed `tcp` services should return 2 IP addresses, 10.0.0.0 and 10.0.0.2"
  }

  assert {
    condition     = local_file.ip_addresses.filename == "./tcp.hcl"
    error_message = "Filename should include service kind `tcp`"
  }
}

run "get_http_services" {
  variables {
    service_kind = "http"
  }

  command = plan

  assert {
    condition     = jsondecode(local_file.ip_addresses.content) == ["10.0.0.1"]
    error_message = "Parsed `http` services should return 1 IP address, 10.0.0.1"
  }

  assert {
    condition     = local_file.ip_addresses.filename == "./http.hcl"
    error_message = "Filename should include service kind `http`"
  }
}

I set some mock values for a set of services in the services variable. The tests include command = plan to check attributes in the Terraform plan without applying any changes. As a result, the unit tests do not create the local file defined in the module. The example demonstrates positive testing, where I test that the input works as expected. Terraform’s testing framework also supports negative testing, where you might expect a validation to fail for an incorrect input. Use the expect_failures attribute to capture the error.
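For instance, a negative test might look like the sketch below. It assumes the module adds a validation rule to var.service_kind restricting it to tcp or http, and that the run block lives in the same test file as the examples above so the file-level services variable still applies; both blocks are illustrative additions, not part of the original example:

# Assumed addition to the module: a validation rule for service_kind.
variable "service_kind" {
  type        = string
  description = "Service kind to search"

  validation {
    condition     = contains(["tcp", "http"], var.service_kind)
    error_message = "The service_kind must be either `tcp` or `http`."
  }
}

# Negative unit test: expect the input validation to fail for an unsupported kind.
run "reject_unknown_service_kind" {
  variables {
    service_kind = "udp"
  }

  command = plan

  expect_failures = [
    var.service_kind,
  ]
}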
If you do not want to use the native testing framework in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice to parse the plan representation in JSON and verify your Terraform logic. Besides testing attributes in the Terraform plan, unit tests can validate: the number of resources or attributes generated by for_each or count, values generated by for expressions, values generated by built-in functions, dependencies between modules, values associated with interpolated values, and expected variables or outputs marked as sensitive. If you wish to unit test infrastructure by simulating a terraform apply without creating resources, you can choose to use mocks. Some cloud service providers offer community tools that mock the APIs for their service offerings. Beware that not all mocks accurately reflect the behavior and configuration of their target API. Overall, unit tests run very quickly and provide rapid feedback. As an author of a Terraform module or configuration, you can use unit tests to communicate the expected values of configuration to other collaborators in your team and organization. Since unit tests run independently of infrastructure resources, they have a virtually zero cost to run frequently. Contract tests At the next level from the bottom of the pyramid, contract tests check that a configuration using a Terraform module passes properly formatted inputs. Contract tests answer the question, “Does the expected input to the module match what I think I should pass to it?” Contract tests ensure that the contract between a Terraform configuration’s expected inputs to a module and the module’s actual inputs has not been broken. Most contract testing in Terraform helps the module consumer by communicating how the author expects someone to use their module. If you expect someone to use your module in a specific way, use a combination of input variable validations, preconditions, and postconditions to validate the combination of inputs and surface the errors. For example, use a custom input variable validation rule to ensure that an AWS load balancer’s listener rule receives a valid integer range for its priority:

variable "listener_rule_priority" {
  type        = number
  default     = 1
  description = "Priority of listener rule between 1 to 50000"

  validation {
    condition     = var.listener_rule_priority > 0 && var.listener_rule_priority < 50000
    error_message = "The priority of listener_rule must be between 1 to 50000."
  }
}

As a part of input validation, you can use Terraform’s rich language syntax to validate variables with an object structure to enforce that the module receives the correct fields. This module example uses a map to represent a service object and its expected attributes:

variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}

In addition to custom validation rules, you can use preconditions and postconditions to verify specific resource attributes defined by the module consumer. For example, you cannot use a validation rule to check if the address blocks overlap.
Instead, use a precondition to verify that your IP addresses do not overlap with networks in HashiCorp Cloud Platform (HCP) and your AWS account:

resource "hcp_hvn" "main" {
  hvn_id         = var.name
  cloud_provider = "aws"
  region         = local.hcp_region
  cidr_block     = var.hcp_cidr_block

  lifecycle {
    precondition {
      condition     = var.hcp_cidr_block != var.vpc_cidr_block
      error_message = "HCP HVN must not overlap with VPC CIDR block"
    }
  }
}

Contract tests catch misconfigurations in modules before applying them to live infrastructure resources. You can use them to check for correct identifier formats, naming standards, attribute types (such as private or public networks), and value constraints such as character limits or password requirements. If you do not want to use custom conditions in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice. Maintain these contract tests in the module repository and use a CI framework to pull them into each Terraform configuration that uses the module. When someone references the module in their configuration and pushes a change to version control, the contract tests run against the plan representation before you apply. Unit and contract tests may require extra time and effort to build, but they allow you to catch configuration errors before running terraform apply. For larger, more complex configurations with many resources, you should not manually check individual parameters. Instead, use unit and contract tests to quickly automate the verification of important configurations and set a foundation for collaboration across teams and organizations. Lower-level tests communicate system knowledge and expectations to teams that need to maintain and update Terraform configuration. Integration tests With lower-level tests, you do not need to create external resources to run them, but the top half of the pyramid includes tests that require active infrastructure resources to run properly. Integration tests check that a module or configuration can successfully create and configure real infrastructure resources. They answer the question, “Does this module or configuration create the resources successfully?” A terraform apply offers limited integration testing because it creates and configures resources while managing dependencies. You should write additional tests to check for configuration parameters on the active resource. In my example, I add a new terraform test to apply the configuration and create the file. Then, I verify that the file exists on my filesystem. The integration test creates the file using a terraform apply and removes the file after issuing a terraform destroy.

run "check_file" {
  variables {
    service_kind = "tcp"
  }

  command = apply

  assert {
    condition     = fileexists("${var.service_kind}.hcl")
    error_message = "File `${var.service_kind}.hcl` does not exist"
  }
}

Should you verify every parameter that Terraform configures on a resource? You could, but it may not be the best use of your time and effort. Terraform providers include acceptance tests that verify resources properly create, update, and delete with the right configuration values. Instead, use integration tests to verify that Terraform outputs include the correct values or number of resources. They also test infrastructure configuration that can only be verified after a terraform apply, such as invalid configurations, nonconformant passwords, or results of for_each iteration.
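If this run block is added to the same test file as the earlier examples (so the file-level services variable still applies), an apply-mode assertion on the generated file's contents might look like the following sketch; the expected count simply mirrors the two tcp entries in that mock data:

run "check_tcp_ip_count" {
  variables {
    service_kind = "tcp"
  }

  command = apply

  assert {
    condition     = length(jsondecode(local_file.ip_addresses.content)) == 2
    error_message = "Expected the generated tcp file to contain exactly 2 IP addresses"
  }
}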
When choosing an integration testing framework outside of terraform test, consider the existing integrations and languages within your organization. Integration tests help you determine whether or not to update your module version and ensure they run without errors. Since you have to set up and tear down the resources, you will find that integration tests can take 15 minutes or more to complete, depending on the resource. As a result, implement as much unit and contract testing as possible to fail quickly on wrong configurations instead of waiting for resources to create and delete. End-to-end tests After you apply your Terraform changes to production, you need to know whether or not you’ve affected end-user functionality. End-to-end tests answer the question, “Can someone use the infrastructure system successfully?” For example, application developers and operators should still be able to retrieve a secret from HashiCorp Vault after you upgrade the version. End-to-end tests can verify that changes did not break expected functionality. To check that you’ve upgraded Vault properly, you can create an example secret, retrieve the secret, and delete it from the cluster. I usually write an end-to-end test using a Terraform check to verify that any updates I make to a HashiCorp Cloud Platform (HCP) Vault cluster return a healthy, unsealed status:

check "hcp_vault_status" {
  data "http" "vault_health" {
    url = "${hcp_vault_cluster.main.vault_public_endpoint_url}/v1/sys/health"
  }

  assert {
    condition     = data.http.vault_health.status_code == 200 || data.http.vault_health.status_code == 473
    error_message = "${data.http.vault_health.url} returned an unhealthy status code"
  }
}

Besides a check block, you can write end-to-end tests in any programming language or testing framework. This usually includes an API call to check an endpoint after creating infrastructure. End-to-end tests usually depend on an entire system, including networks, compute clusters, load balancers, and more. As a result, these tests usually run against long-lived development or production environments. Testing Terraform modules When you test Terraform modules, you want enough verification to ensure a new, stable release of the module for use across your organization. To ensure sufficient test coverage, write unit, contract, and integration tests for modules. A module delivery pipeline starts with a terraform plan and then runs unit tests (and if applicable, contract tests) to verify the expected Terraform resources and configurations. Then, run terraform apply and the integration tests to check that the module can still run without errors. After running integration tests, destroy the resources and release a new module version. The Terraform Cloud private registry offers a branch-based publishing workflow that includes automated testing. If you use terraform test for your modules, the private registry automatically runs those tests before releasing a module. When testing modules, consider the cost and test coverage of module tests. Conduct module tests in a different project or account so that you can independently track the cost of your module testing and ensure module resources do not overwrite environments. On occasion, you can omit integration tests because of their high financial and time cost. Spinning up databases and clusters can take half an hour or more. When you’re constantly pushing changes, you might even create multiple test instances.
To manage the cost, run integration tests after merging feature branches and select the minimum number of resources you need to test the module. If possible, avoid creating entire systems. Module testing applies mostly to immutable resources because of its create and delete sequence. The tests cannot accurately represent the end state of brownfield (existing) resources because they do not test updates. As a result, it provides confidence in the module’s successful usage but not necessarily in applying module updates to live infrastructure environments. Testing Terraform configuration Compared to modules, Terraform configuration applied to environments should include end-to-end tests to check for end-user functionality of infrastructure resources. Write unit, integration, and end-to-end tests for configuration of active environments. The unit tests do not need to cover the configuration in modules. Instead, focus on unit testing any configuration not associated with modules. Integration tests can check that changes successfully run in a long-lived development environment, and end-to-end tests verify the environment’s initial functionality. If you use feature branching, merge your changes and apply them to a production environment. In production, run end-to-end tests against the system to confirm system availability. Failed changes to active environments will affect critical business systems. In its ideal form, a long-running development environment that accurately mimics production can help you catch potential problems. From a practical standpoint, you may not always have a development environment that fully replicates a production environment because of cost concerns and the difficulty of replicating user traffic. As a result, you usually run a scaled-down version of production to save money. The difference between development and production will affect the outcome of your tests, so be aware of which tests are more important for flagging errors and which are too disruptive to run. Even if configuration tests have less accuracy in development, they can still catch a number of errors and help you practice applying and rolling back changes before production. Conclusion Depending on your system’s cost and complexity, you can apply a variety of testing strategies to Terraform modules and configuration. While you can write tests in your programming language or testing framework of choice, you can also use the testing frameworks and constructs built into Terraform for unit, contract, integration, and end-to-end testing.

Test type | Use case | Terraform configuration
Unit test | Modules, configuration | terraform test
Contract test | Modules | Input variable validation, preconditions/postconditions
Integration test | Modules, configuration | terraform test
End-to-end test | Configuration | Check blocks

This post has explained the different types of tests and how you can apply them to catch errors in Terraform configurations and modules before production, and how to incorporate them into pipelines. Your Terraform testing strategy does not need to be a perfect test pyramid. At the very least, automate some tests to reduce the time you need to manually verify changes and check for errors before they reach production. Check out our tutorial on how to Write Terraform tests to learn about writing Terraform tests for unit and integration testing and running them in the Terraform Cloud private module registry. For more information on using checks, Use checks to validate infrastructure offers a more in-depth example.
If you want to learn about writing tests for security and policy, review our documentation on Sentinel. View the full article
-
HashiCorp 2023 year in review: Community
Hashicorp posted a topic in Infrastructure-as-Code
2023 marked a return to many of the rhythms of connection with the HashiCorp community, including in-person events around the world as well as a full slate of virtual talks and presentations. In addition, we set new records for community downloads and expanded our partner ecosystem. And 2023 also saw the HashiCorp certifications program accelerate into new territory. A year of community and events There were many event highlights for the HashiCorp community in 2023, and here are a few we think are especially important. First, we’re excited about having more than 27,000 combined attendees and visitors at our HashiConf event in San Francisco in October and the HashiCorp booth at AWS re:Invent in Las Vegas at the end of November. Back in June, we hosted HashiDays events online and in London, Paris, and Munich. We love the chance to connect with the community in person, and the more the merrier. We are also proud to count more than 180 HashiCorp User Groups (HUGs) in 63 countries, comprising nearly 51,000 members (find one near you or reach out to us about starting your own HUG). We also produced a digital conference series, HashiTalks, spanning the globe and reaching people in 10 different languages. We had the opportunity to engage with our 111 HashiCorp Ambassadors in person and online throughout the year to share sneak previews of our roadmap and plans while gathering feedback on how to make our products better for everyone. We celebrated contributions from more than 100 Core Contributors. A year of learning and achievement Of course, community is more than just get-togethers; it’s also about learning and sharing information. We now have four HashiCorp Cloud Engineer Certifications available, including Terraform Associate 003, Vault Associate 002, Vault Operations Professional, and Consul Associate 002. Consul Associate 003 will be available in 2024. In 2023, more than 24,000 certification exams were taken by community members in 108 countries. At HashiConf alone, 178 people took a certification exam and we got some great feedback on an early version of the Terraform Professional exam. To break it down a bit, Terraform Associate was the most awarded certification, driven in part by the launch of the new Terraform Associate 003 certification for cloud engineers specializing in operations, IT, or development. Of the folks who achieved the Vault Professional certification, 84% were upskilling from the Vault Associate certification they had previously earned. To date, we’ve awarded more than 51,000 total certifications. We’re plenty proud of all the folks who earned certifications this year, and so are they! On average, people who have earned a HashiCorp certification have shared their badges with up to three other people and encouraged/challenged their friends to get certified as well. We also want to give a shout-out to all the folks who helped develop and QA our certification exams with their expertise, including the 109 beta and alpha testers, the 56 subject matter experts who worked on the associate certification exams, and the 39 who contributed to the professional exams. A year of software downloads It’s also important to note the acceleration in community creation and use of HashiCorp software and integrations. There were some 487 million community downloads this year, and HashiCorp community software was downloaded or used in 85.8% of the Fortune 500. Equally impressive, as of October 2023, the Terraform Registry holds more than 3,500 providers and 15,000 modules.
The partner ecosystem now holds more than 3,000 providers and integrations. A fond goodbye to Mitchell Finally, we can’t talk about the year in the HashiCorp community without acknowledging the bittersweet departure of HashiCorp Co-Founder Mitchell Hashimoto. No one has had more influence on the HashiCorp community than Mitchell, and we are all eternally grateful for his many essential contributions. We’re definitely going to miss him, but we’re also excited to learn about what’s next for him — and for the entire HashiCorp community. On to 2024 2023 has seen even more engagement and growth from the HashiCorp community than in years past, and we hope to accelerate that even more in the years to come. As in previous years, we build our tools with your needs in mind and remain deeply thankful for the opportunity to help all of you in your learning journey around HashiCorp’s infrastructure and security solutions. Please stay tuned and engage with us as we work to make our tools even better at helping you solve your most important technical and business challenges. Thank you for your continued trust in allowing us to help meet your cloud infrastructure needs, and for the feedback you share to make our tools better for everyone. View the full article
Be sure to also read our 2023 product innovation year in review blog. 2023 was another great year of growing our relationships with both customers and ecosystem partners. Our list of customers around the world, including many well-known names, continued to grow, as did the stories from those customers about how working with HashiCorp addressed their most pressing infrastructure and security issues. In addition, our partner ecosystem and cooperation also expanded, as we handed out and received awards that demonstrated our commitment to working with some of the most innovative and responsive companies in the industry. Here are some of this year's highlights we'd like to celebrate:

HashiCorp now has more than 4,300 customers

Every year we are humbled and energized by the amount of trust the world's leading companies place in our products, relying on them to run thousands of systems that millions of people use every day (often without even noticing it). Those companies increasingly include some of the biggest, best-known firms in the world. In fact, over 200 of our more than 4,300 current customers are in the Fortune 500, and 472 are in the Global 2000.

Even more than the numbers, though, we are inspired by the stories our customers tell about how working with HashiCorp made a huge difference. Case in point: Home Depot, a Fortune 10 company, had its Director of Cloud Engineering and Enablement, Evan Wood, join us onstage at HashiConf this year to talk about Terraform and platform team enablement. Evan's story was just one of many new case studies and customer sessions that we shared this year. For example, Deutsche Bank established more than 200 cloud landing zones for 3,000+ developers with Terraform. The Innovation Labs at telecom leader Verizon are using Boundary to provide secure MEC access to collaborate with external partners. The world's largest job site, Indeed.com, uses HashiCorp Vault as a resilient, reliable platform to deliver millions of secrets every day to globally distributed workloads. HashiCorp Nomad powers Epic Games' Fortnite creator ecosystem. And Canadian payments processor Interac uses HashiCorp solutions to help modernize its infrastructure and enable billions of secure, irrevocable, near real-time nationwide transactions each year.

In other important customer-related news from 2023, in July Susan St. Ledger joined HashiCorp as President, Worldwide Field Operations. With deep, successful experience at Okta, Splunk, Salesforce, and Sun Microsystems, Susan oversees all aspects of the customer journey, helping customers maximize the value they receive from the HashiCorp product suite, from initial deployment to customer and partner success, renewal, and expansion. To learn more about HashiCorp customers relevant to your world, filter our case study library by industry.

Recognition and relationships with cloud and technology partners

Of course, we couldn't deliver so much value to our customers without the essential contributions of our technology and cloud service providers dedicated to our mutual success. That's why we're so proud to have been honored with awards from a wide variety of ecosystem partners, including:

AWS Global Collaboration Partner of the Year award
Datadog Partner Network Integration Developer Partner of the Year award
Microsoft Global OSS on Azure Partner of the Year
Palo Alto Networks 2023 Global Technology Partner of the Year award

Our collaborative development efforts with partners continue to produce valuable integrations.
Last year, HashiCorp and AWS celebrated a landmark milestone: 1 billion Terraform AWS provider downloads. This year, with a major new release, the AWS provider for Terraform was downloaded another billion times, surpassing 2 billion total downloads. And that's only part of the story. In 2023, we worked with Amazon Web Services on everything from self-service provisioning to fighting secrets sprawl, and we were an Emerald Sponsor of AWS re:Invent. Of particular interest was the launch of AWS Service Catalog support for Terraform Community and Terraform Cloud, allowing administrators to curate a portfolio of pre-approved Terraform configurations on AWS Service Catalog.

We also had a major new release of the Google Cloud Terraform provider, which saw two-thirds of its all-time downloads in 2023. These milestones underscore how the need for standardized infrastructure as code solutions is accelerating. We also made significant updates to the ServiceNow Terraform Catalog and introduced the ServiceNow Service Graph Connector, making it easier for ServiceNow customers to provision and track infrastructure state with Terraform.

Notably, one of our key focus areas is working with partners on AI for infrastructure management. On AWS, customers can accelerate Terraform development with Amazon CodeWhisperer. In December, HashiCorp and Google Cloud announced an extension of our partnership to advance product offerings with generative AI. And on Microsoft Azure, customers can use Azure DevOps and GitHub Copilot to simplify and accelerate development of AI apps.

Finally, we'd be remiss if we didn't recognize the incredible contributions of our vibrant ecosystem of technology and systems integrator partners. In particular, we want to congratulate our technology partner winners Palo Alto Networks, Datadog, Zscaler, and Tines, as well as all 13 regional and global systems integrator winners.

Looking forward to even more rewarding partnerships in 2024

We recognize the incredible responsibility of building software that underpins so much of the world's IT infrastructure. For 2024, we'll continue to work with our partners to build more great integrations and synergies to help our customers succeed. After all, we consider our customers partners, too, as we help them deliver great experiences to their end users. We can't wait to see what we build together in 2024. View the full article
-
Enterprises leverage public key infrastructure (PKI) to encrypt, decrypt, and authenticate information between servers, digital identities, connected devices, and application services. PKI is used to establish secure communications that mitigate the risk of data theft and protect proprietary information as organizations increase their reliance on the internet for critical operations. This post will explore how public key cryptography enables related keys to encrypt and decrypt information and guarantee the integrity of data transfers... View the full article
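To make the idea in the excerpt above concrete, here is a minimal, self-contained Go sketch of how a related key pair behaves: the public key encrypts and verifies, while the private key decrypts and signs. It is an illustration only, not code from the linked post; the message text and key size are arbitrary assumptions.

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Generate an RSA key pair; the private key stays secret,
	// while the public key can be shared freely.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub := &priv.PublicKey

	msg := []byte("example payload") // arbitrary sample message

	// Confidentiality: anyone with the public key can encrypt,
	// but only the private-key holder can decrypt.
	ciphertext, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, pub, msg, nil)
	if err != nil {
		panic(err)
	}
	plaintext, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, priv, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decrypted: %s\n", plaintext)

	// Integrity and authenticity: the private-key holder signs a hash
	// of the message, and anyone with the public key can verify it.
	digest := sha256.Sum256(msg)
	sig, err := rsa.SignPSS(rand.Reader, priv, crypto.SHA256, digest[:], nil)
	if err != nil {
		panic(err)
	}
	if err := rsa.VerifyPSS(pub, crypto.SHA256, digest[:], sig, nil); err != nil {
		panic(err)
	}
	fmt.Println("signature verified: message was not tampered with")
}
```

In a full PKI, these raw key pairs are wrapped in certificates issued by a certificate authority; tools such as HashiCorp Vault's PKI secrets engine automate that issuance and rotation so applications rarely handle keys this directly.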
-
In the ever-evolving landscape of IT infrastructure, the ability to create custom images efficiently and consistently is a game-changer. This is where HashiCorp Packer comes into play: a powerful tool that revolutionizes the image creation process across platforms such as AWS, Azure, and GCP, among others. This blog post, based on the HashiCorp Packer course offered by KodeKloud, serves as a comprehensive guide to mastering HashiCorp Packer... View the full article
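As a companion to the excerpt above, here is a small Go sketch of how a build script might wrap the Packer CLI, validating a template before building it. It assumes the packer binary is on the PATH and that a template file named image.pkr.hcl exists in the working directory; the filename and the workflow are illustrative assumptions, not material from the post or the KodeKloud course.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// runPacker invokes the packer CLI with the given arguments and
// streams its output to the console.
func runPacker(args ...string) error {
	cmd := exec.Command("packer", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Hypothetical template name; substitute your own .pkr.hcl file.
	template := "image.pkr.hcl"

	// Validate the template first so syntax or variable errors
	// fail fast, before any cloud resources are created.
	if err := runPacker("validate", template); err != nil {
		log.Fatalf("template validation failed: %v", err)
	}

	// Build the image; Packer drives whichever builders
	// (e.g. AWS, Azure, GCP) the template configures.
	if err := runPacker("build", template); err != nil {
		log.Fatalf("image build failed: %v", err)
	}

	fmt.Println("image build completed")
}
```

With newer HCL2 templates you would typically run packer init first to install the required plugins before validating and building.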
-
We are delighted to announce the winners of the 2023 HashiCorp Partner Network Awards for Systems Integrators (SIs). The awards honor top-performing partners that have demonstrated excellence in sales and services for HashiCorp solutions. They celebrate our partners’ commitment to helping enterprises accelerate their cloud adoption strategies. Each of these winners has delivered significant technology capabilities relating to the HashiCorp portfolio, assisted with growing the capabilities of HashiCorp’s products, and consistently delivered value to our joint customers through services and innovation initiatives... View the full article
-
HashiCorp’s vibrant ecosystem incorporates hundreds of partners that help address specific customer challenges collaboratively and efficiently. At the HashiConf conference in San Francisco today, we are announcing the winners of the 2023 HashiCorp Technology Partner Awards. The Technology Partner Awards celebrate HashiCorp technology partners who have prioritized customer solutions around the HashiCorp Cloud Platform (HCP) framework, simplifying customer implementations through new integrations, co-engineered solutions, and participation in joint marketing initiatives. Our ecosystem partners are foundational to the success of HashiCorp, and we are thrilled to recognize their contributions... View the full article
-