Showing results for tags 'automation'.

Found 12 results

  1. In response to the scale and complexity of modern cloud-native technology, organizations are increasingly reliant on automation to properly manage their infrastructure and workflows. DevOps automation eliminates extraneous manual processes, enabling DevOps teams to develop, test, deliver, deploy, and execute other key processes at scale. Automation thus contributes to accelerated productivity and innovation across the organization. Automation can be particularly powerful when applied to DevOps workflows. According to the Dynatrace 2023 DevOps Automation Pulse report, an average of 56% of end-to-end DevOps processes are automated across organizations of all kinds. However, despite the rising popularity of DevOps automation, the maturity levels of that automation vary from organization to organization. These discrepancies can be a result of toolchain complexity (with 53% of organizations struggling in this area), siloed teams (46%), lack of resources (44%), cultural resistance (41%), and more.

Understanding exactly where an organization’s automation maturity stands is key to advancing to the next level. Armed with this knowledge, organizations can systematically address their weaknesses and specifically determine how to improve these areas. For this reason, teams need a comprehensive evaluation to assess their implementation of numerous facets of DevOps automation. The DevOps Automation Assessment is a tool to help organizations holistically evaluate their automation maturity and make informed strides toward the next level of DevOps maturity.

How the DevOps automation assessment works

The DevOps automation assessment consists of 24 questions across the following four key areas of DevOps:

Automation governance: The automation governance section deals with overarching, organization-wide automation practices. It addresses the extent to which an organization prioritizes automation efforts, including budgets, ROI models, standardized best practices, and more.

Development & delivery automation: This section addresses the extent to which an organization automates processes within the software development lifecycle (SDLC), including deployment strategies, configuration approaches, and more.

Operations automation: The operations section addresses the level of automation organizations use in maintaining and managing existing software. It explores infrastructure provisioning, incident management, problem remediation, and other key practices.

Security automation: The final section addresses how much automation an organization uses when mitigating vulnerabilities and threats. It includes questions relating to vulnerability prioritization, attack detection and response, application testing, and other central aspects of security.

This comprehensive assessment provides maturity levels for each of these four areas, offering a nuanced understanding of where an organization’s automation maturity stands. Since teams from one functional area to another may be siloed, a respondent who is not knowledgeable on the automation practices of a certain area can still obtain insights by answering the questions that pertain to their team’s responsibilities.

Scoring

The questions are both quantitative and qualitative, each one addressing key determinants of DevOps automation maturity. Examples of qualitative questions include: How is automation created at your organization? What deployment strategies does your organization use?
By contrast, the quantitative questions include: What proportion of time do software engineering and development teams spend writing automation scripts? How long do you estimate it takes to remediate a problem within one of your production applications?

The tool assigns every response to a question a unique point value. Based on the total for each section, the assessment determines and displays an organization’s maturity levels in the four key areas (a simplified, hypothetical sketch of this kind of section scoring appears after this entry).

The maturity levels

The DevOps Automation Assessment evaluates each of the four key areas according to the following four maturity levels.

Foundational: Foundational is the most basic level of automation maturity. At this level, automation practices are either non-existent or elementary, and not adding significant value to DevOps practices. Organizations at this maturity level should aim to build a strong automation foundation, define automation principles, and lay the groundwork for a more mature automation framework.

Standardized: At the standardized level, automation has become more integrated into key DevOps processes. This includes expediting workflows, ensuring consistency, and reducing manual effort to a modest degree. The goal of organizations at this maturity level should be to achieve a higher level of automation integration and synergy between different stages of the DevOps lifecycle.

Advanced: Once an organization’s automation maturity reaches the advanced level, its automation practices are integrated across the SDLC and assist greatly in scaling and executing DevOps processes. Organizations at this maturity level should strive to improve operational excellence by adopting AI analysis into their automation-driven practices.

Intelligent: To reach the intelligent level, automation must be wholly reliable, sophisticated, and ingrained within organizational culture. At this level, organizations are leveraging artificial intelligence and machine learning (AI/ML) to bolster their automation practices. The goal of organizations at this maturity level should be to achieve higher levels of efficiency, agility, and innovation through intelligent, AI-driven automation practices.

The tool calculates a separate DevOps maturity level for each of the four individual sections (governance, development & delivery, operations, and security). For example, a respondent could hypothetically receive a foundational ranking for automation governance, advanced for development & delivery, intelligent for operations, and standardized for security.

Next steps: Using the results to advance DevOps automation maturity

While it is helpful to understand an organization’s automation maturity level in key DevOps areas, the information is only useful if teams can leverage it for improvement. Even at the highest level of automation maturity, there is always room to continually improve automation practices and capabilities as underlying technologies and the teams that use them evolve. But how exactly can an organization advance its automation maturity to the next level? The DevOps Automation Pulse provides ample guidance and actionable steps for every level of automation maturity. For example, to progress from standardized to advanced, the report recommends that organizations implement a single source of reliable observability data to prioritize alerts continuously and automatically.
Or, to progress from advanced to intelligent, the report encourages organizations to introduce AI/ML to assist continuous and automatic security processes, including vulnerability detection, investigation, assignments, remediation verification, and alert prioritization. What’s more, teams can advance workflow automation further by adopting unified observability and log data collection, along with different forms of AI (predictive, causal, generative) for automated analysis. All of these recommendations, along with the latest insights on the current state of DevOps automation, are available in the DevOps Automation Pulse.

Start the journey toward greater DevOps automation

With consumer demands for quality and speed at unprecedented levels, DevOps automation is essential for organizations of all sizes and sectors. An organization’s automation maturity level may often be the determining factor in whether it pulls ahead of or falls behind the competition. Once at a mature level, organizations with automated workflows, repeatable tasks, and other automated DevOps processes can not only exponentially accelerate business growth but also improve employee satisfaction and productivity. The first step toward achieving these benefits is understanding exactly where an organization’s current automation maturity level stands. This knowledge empowers teams to embrace an informed and systematic approach to further advancing their organization’s automation maturity. Discover your organization’s automation maturity levels by taking the DevOps Automation Assessment. For more actionable insights, download the 2023 DevOps Automation Pulse report, a comprehensive guide on the current state of DevOps automation and how organizations can overcome persistent challenges. Download the report

The post The State of DevOps Automation assessment: How automated are you? appeared first on Dynatrace news. View the full article
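To make the section-based scoring described in this entry concrete, here is a minimal sketch in Python. The questions, answer point values, and level thresholds are hypothetical, invented for illustration; the actual assessment's scoring logic is not published in the post.

```python
# Hypothetical sketch of section-based maturity scoring, loosely modeled on the
# assessment described above. Point values and thresholds are invented for
# illustration; the real tool's scoring is not published.

SECTIONS = ["governance", "development_delivery", "operations", "security"]

# Each answer maps to a point value (hypothetical).
ANSWER_POINTS = {
    "none": 0,
    "ad_hoc": 1,
    "standardized": 2,
    "automated": 3,
    "ai_assisted": 4,
}

# Thresholds per section (hypothetical), as a fraction of the maximum score.
LEVELS = [
    (0.25, "Foundational"),
    (0.50, "Standardized"),
    (0.75, "Advanced"),
    (1.00, "Intelligent"),
]


def maturity_level(answers: list[str]) -> str:
    """Map a section's answers to one of the four maturity levels."""
    total = sum(ANSWER_POINTS[a] for a in answers)
    maximum = len(answers) * max(ANSWER_POINTS.values())
    ratio = total / maximum if maximum else 0.0
    for threshold, level in LEVELS:
        if ratio <= threshold:
            return level
    return LEVELS[-1][1]


if __name__ == "__main__":
    responses = {
        "governance": ["ad_hoc", "none", "standardized"],
        "development_delivery": ["automated", "automated", "ai_assisted"],
        "operations": ["standardized", "automated", "standardized"],
        "security": ["ad_hoc", "standardized", "none"],
    }
    for section in SECTIONS:
        print(section, "->", maturity_level(responses[section]))
```

Because each section is scored independently, a respondent can receive a different level per area, matching the example of foundational governance alongside advanced development & delivery.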
  2. As organisations embark on becoming more digitally enabled, the road is paved with many twists and turns. Data science teams and analysts, as well as the teams they serve, know full well that the path to analytic excellence is not linear. The good news is organisations have opportunities to unlock value at each step along the way. The pattern by which companies develop strength within their data is highly repeatable, underpinned by more ways to manipulate data and unlock the benefits of automation. While automation in and of itself isn’t digital transformation, since new processes are not being launched, it frequently delivers huge value and lays the framework for organizations to make major operational improvements. With automation in place, organizations can harness more analytical approaches with modelling enhanced by AI and ML. Once these core capabilities move out of the sole domain of technical IT teams and are put into the hands of more domain experts, true transformation of business process occurs and more overall value is derived from analytics.

Delivering value from the start

Automation is typically one of the earliest steps in overhauling enterprise analytics. In my experience, this step won’t deliver as much value as those that follow – but it’s still significant and, beyond that, vital. Let’s take a large manufacturer automating its VAT tax recovery process as an example. While some might assume that this type of automation simply saves time, many companies are not recovering 100% of their VAT because the manual, legacy process has a cost, and if the VAT is below a given value, it might not be worth the recovery. When this process is automated, 100% VAT recovery yields become possible – the hard cash savings for the business can’t be ignored (a simplified illustration appears after this entry). Finance teams can automate many of the manual processes required to close their books each quarter, reducing the time it takes to close from a matter of weeks to days. Audit teams can upgrade from manual audits repeated every couple of years to continuous audits which check for issues daily and report any issues automatically and instantly. From reducing cost and risk to increasing revenue and saving time for employees (your greatest asset), automation is having a huge impact on organizations around the globe. With this lens, it’s evident that automation amounts to much more than time savings.

Two varying approaches

There are two very different approaches that organizations have historically taken to drive automation. The first, which has a more limited impact, is to form a centralized team and have that small team attempt to automate processes around the business. The second approach is to upskill employees so that every worker is capable of automating a process. This latter approach can scale at a very different pace and impact. Organizations can upskill tens of thousands of employees and automate millions of manual processes. This would be very difficult with a small team trying to perform the same automation. It can lead to substantial business benefits, including increased productivity, reduced costs and greater revenue. Historically, of course, the latter approach has also been nigh on impossible to execute – given the requirement for familiarity with coding languages to use code-heavy technologies. But that was then – today, mature low-code systems present a massive opportunity to upskill employees to automate processes simply by asking the right questions.
This isn’t simply an alternative route – it should be the only route for organizations that are serious about achieving analytical excellence. Code-free platforms remove the need for departments to wait in queues for IT teams to deliver an application that fits their needs. They put the power of automated analytical and development capabilities into the hands of business domain experts with the specific expertise needed to get valuable insight from analytics quicker. Upskilling efforts therefore need to be directed towards making such a broad data-literate culture possible.

Providing teams with automation tools

For many organisations, a common strategy for driving upskilling and capability is to focus on their new employees. With attrition and growth rates at many businesses ranging between 5 and 10%, organisations can face the challenge of replacing as much as a quarter of their entire team every 18 months. Providing training and technology that enable inevitable new joiners to automate processes is therefore essential for every department to drive efficiencies and upskill the overall workforce. This is already taking place within the education sector, with many schools beginning to implement automation technologies and analytic techniques in their curricula, particularly in business schools and in accounting, marketing and supply chain courses. Businesses that do not take notice and prioritize these skills as well will likely not only continue to suffer from the inefficiencies of manual processes but could also risk the attrition cost of failing to provide their employees with the modern tools that are being taught in the base curriculums of these degree programs.

Automation is the first step towards analytics excellence, but its relevance doesn’t stop there. It’s through automation that leaders can unlock clear, traceable benefits for their organizations in terms of overhauled processes as well as setting them on the right path when it comes to upskilling and democratizing data.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
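The VAT recovery example in this entry can be illustrated with a small, hypothetical Python sketch: under a manual process, claims below a per-claim processing cost are not worth recovering; once the process is automated, that cost drops to near zero and every claim becomes recoverable. All figures below are invented for illustration.

```python
# Hypothetical illustration of the VAT recovery example above.
# Figures are invented; the point is that automation removes the per-claim
# processing cost that makes small claims uneconomical to recover manually.

vat_claims = [12.50, 480.00, 3.75, 1_250.00, 42.00, 8.90, 310.00]  # EUR per claim

MANUAL_COST_PER_CLAIM = 25.00      # assumed cost of processing a claim by hand
AUTOMATED_COST_PER_CLAIM = 0.05    # assumed cost once the process is automated


def recovered(claims, cost_per_claim):
    """Only claims worth more than the processing cost are recovered."""
    return sum(c - cost_per_claim for c in claims if c > cost_per_claim)


print(f"Manual recovery:    {recovered(vat_claims, MANUAL_COST_PER_CLAIM):8.2f} EUR")
print(f"Automated recovery: {recovered(vat_claims, AUTOMATED_COST_PER_CLAIM):8.2f} EUR")
print(f"Total VAT owed:     {sum(vat_claims):8.2f} EUR")
```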
  3. In the previous blog post of this series, we discussed the crucial role of Dynatrace as an orchestrator that steps in to stop the testing phase in case of any errors. Additionally, Dynatrace equips SREs and application teams with valuable insights powered by Davis® AI. In this blog post of the series, we will explore the use of Site Reliability Guardian (SRG) in more detail. SRG is a potent tool that automates the analysis of release impacts, ensuring validation of service availability, performance, and capacity objectives throughout the application ecosystem by examining the effect of advanced test suites executed earlier in the testing phase.

Validation stage overview

The validation stage is a crucial step in the CI/CD (Continuous Integration/Continuous Deployment) process. It involves carefully examining the test results from the previous testing phase. The main goal of this stage is to identify and address any issues or problems that were detected. Doing so reduces the risk of production disruptions and instills confidence in both SREs (Site Reliability Engineers) and end-users. Depending on the outcome of the examination, the build is either approved for deployment to the production environment or rejected.

Challenges of the validation stage

In the validation phase, SREs face specific challenges that significantly slow down the CI/CD pipeline. Foremost among these is the complexity associated with data gathering and analysis. The burgeoning reliance on cloud technology stacks amplifies this challenge, creating hurdles due to budgetary constraints, time limitations, and the potential risk of human errors. Additionally, another pivotal challenge arises from the time spent on issue identification. Both SREs and application teams invest substantial time and effort in locating and rectifying software glitches within their local environments. These prolonged processes not only strain resources but also introduce delays within the CI/CD pipeline, hampering the timely release of new features to end-users.

Mitigate challenges with Dynatrace

With the support of Dynatrace Grail™, AutomationEngine, and the Site Reliability Guardian, SREs and application teams are assisted in making informed release decisions by utilizing telemetry observability and other insights. Additionally, the Visual Resolution Path within generated problem reports helps in reproducing issues in their environments. The Visual Resolution Path offers a chronological overview of events detected by Dynatrace across all components linked to the underlying issue. It incorporates the automatic discovery of newly generated compute resources and any static resources that are in play. This view seamlessly correlates crucial events across all affected components, eliminating the manual effort of sifting through various monitoring tools for infrastructure, process, or service metrics. As a result, businesses and SREs can redirect their manual diagnostic efforts toward fostering innovation.

Configure an action for the Site Reliability Guardian in the workflow. The action should focus on validating the guardian’s adherence to the application ecosystem’s specific objectives (SLOs). Additionally, align the action’s validation window with the timeframe derived from the recently completed test events. As the action begins, the Site Reliability Guardian (SRG) evaluates the set objective by analyzing the telemetry data produced during advanced test runs.
At the same time, SRG uses DAVIS_EVENTS to identify any potential problems, which could result in one of two outcomes (a simplified sketch of the resulting promotion or rejection call appears after this entry).

Outcome #1: Build promotion

Once the newly developed code is in line with the objectives outlined in the Guardian—and assuming that Davis AI doesn’t generate any new events—the SRG action activates the successful path in the workflow. This path includes a JavaScript action called promote_jenkins_build, which triggers an API call to approve the build being considered, leading to the promotion of the build deployment to production.

Outcome #2: Build rejection

If Davis AI generates any issue events related to the wider application ecosystem or if any of the objectives configured from the defined guardian are not met, the build rejection workflow is automatically initiated. This triggers the disapprove_jenkins_build JavaScript action, which leads to the rejection of the build. Moreover, by utilizing helpful service analysis tools such as Response Time Hotspots and Outliers, SREs can easily identify the root cause of any issues and save considerable time that would otherwise be spent on debugging or taking necessary actions. SREs can also make use of the Visual Resolution Path to recreate the issues on their setup or identify the events for different components that led to the issue.

In both scenarios, a Slack message is sent to the SREs and the impacted app team, capturing the build promotion or rejection. The telemetry data’s automated analytics, powered by SRG and Davis AI, simplify the process of promoting builds. This approach effectively tackles the challenges that come with complex application ecosystems. Additionally, the integration of service tools and Visual Resolution Path helps to identify and fix issues more quickly, resulting in an improved mean time to repair (MTTR).

Validation in the platform engineering context

Dynatrace—essential within the realm of platform engineering—streamlines the validation process, providing critical insights into performance metrics and automating the identification of build failures. By leveraging SRG and Visual Resolution Path, along with Davis AI causal analysis, development teams can quickly pinpoint issues and rectify them, ensuring a fail-smart approach. The integration of service analysis tools further enhances the validation phase by automating code-level inspections and facilitating timely resolutions. Through these orchestrated efforts, platform engineering promotes a collaborative environment, enabling more efficient validation cycles and fostering continuous enhancement in software quality and delivery.

In conclusion, the integration of Dynatrace observability provides several advantages for SREs and DevOps, enabling them to enhance the key DORA metrics:

Deployment Frequency: Improved deployment rate through faster and more informed decision-making. SREs gain visibility into each stage, allowing them to build faster and promptly address issues using the Dynatrace feature set.

Change Lead Time: Enhanced efficiency across stages with Dynatrace observability and security tools, leading to quicker postmortems and fewer interruption calls for SREs.

Change Failure Rate: Reduction in incidents and rollbacks achieved by utilizing “Configuration Change” events or deployment and annotation events in Dynatrace. This enables SREs to allocate their time more effectively to proactively address actual issues instead of debugging underlying problems.
Time to Restore Service: While these proactive approaches can help improve Deployment Frequency and Change Lead Time, telemetry observability data combined with Davis AI, the Dynatrace causation engine, can aid in improving Time to Restore Service.

In addition, Dynatrace can leverage the events and telemetry data that it receives during the Continuous Integration/Continuous Deployment (CI/CD) pipeline to construct dashboards. By using JavaScript and DQL, these dashboards can help generate reports on the current DORA metrics. This method can be expanded to gain a better understanding of the SRG executions, enabling us to pinpoint the responsible guardians and the SLOs managed by various teams and identify any instances of failure. Addressing such failures can lead to improvements and further enhance the DORA metrics. Below is a sample dashboard that provides insights into DORA and SRG execution.

In the next blog post, we’ll discuss the integration of security modules into the DevOps process with the aim of achieving DevSecOps. Additionally, we’ll explore the incorporation of Chaos Engineering during the testing stage to enhance the overall reliability of the DevSecOps cycle. We’ll ensure that these efforts don’t affect the Time to Restore Service turnaround build time and examine how we can improve the fifth key DORA metric, Reliability.

What’s next?

Curious to see how it all works? Contact us to schedule a demo and we’ll walk you through the various workflows, JavaScript tasks, and the dashboards discussed in this blog series. Contact Sales

If you’re an existing Dynatrace Managed customer looking to upgrade to Dynatrace SaaS, see How to start your journey to Dynatrace SaaS. The post Automate CI/CD pipelines with Dynatrace: Part 4, Validation stage appeared first on Dynatrace news. View the full article
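The post describes promote_jenkins_build and disapprove_jenkins_build as JavaScript actions inside the Dynatrace workflow. Below is a rough Python sketch of the kind of Jenkins API call such actions might make, not the actual workflow code. The Jenkins URL, job name, input-step ID, and credentials are placeholders, and the exact endpoint depends on how the pipeline's input (approval) step is configured.

```python
# Hypothetical sketch of the build promotion / rejection call described above.
# The real implementation runs as a JavaScript action inside a Dynatrace
# workflow; this only illustrates a possible underlying Jenkins API call.
# Job name, build number, input ID, and credentials are placeholders.

import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder
JOB = "payment-service"                       # placeholder pipeline job
AUTH = ("sre-bot", "api-token")               # placeholder credentials


def promote_build(build_number: int, input_id: str = "ReleaseGate") -> None:
    """Approve a pipeline paused at an input step, promoting the build."""
    url = f"{JENKINS_URL}/job/{JOB}/{build_number}/input/{input_id}/proceedEmpty"
    requests.post(url, auth=AUTH, timeout=10).raise_for_status()


def reject_build(build_number: int, input_id: str = "ReleaseGate") -> None:
    """Abort the input step, rejecting the build after failed SRG validation."""
    url = f"{JENKINS_URL}/job/{JOB}/{build_number}/input/{input_id}/abort"
    requests.post(url, auth=AUTH, timeout=10).raise_for_status()


def handle_srg_result(objectives_met: bool, davis_problems: int, build_number: int) -> None:
    """Outcome #1: promote when objectives pass and no Davis problems exist.
    Outcome #2: reject otherwise."""
    if objectives_met and davis_problems == 0:
        promote_build(build_number)
    else:
        reject_build(build_number)
```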
  4. ETL, or Extract, Transform, Load, serves as the backbone for data-driven decision-making in today's rapidly evolving business landscape. However, traditional ETL processes often suffer from challenges like high operational costs, error-prone execution, and difficulty scaling. Enter automation—a strategy that is not merely a facilitator but a necessity to alleviate these burdens. So, let's dive into the transformative impact of automating ETL workflows, the tools that make it possible, and the methodologies that ensure robustness.

The Evolution of ETL

Gone are the days when ETL processes were relegated to batch jobs that ran in isolation, churning through records in an overnight slog. The advent of big data and real-time analytics has fundamentally altered the expectations from ETL processes. As Doug Cutting, the co-creator of Hadoop, aptly said, "The world is one big data problem." This statement resonates more than ever as we are bombarded with diverse, voluminous, and fast-moving data from myriad sources. View the full article
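For readers less familiar with the pattern this entry discusses, here is a minimal, self-contained Python sketch of an automatable ETL step: extract rows from a CSV file, transform them, and load them into SQLite. The file name and schema are hypothetical, chosen only to keep the example runnable.

```python
# Minimal, hypothetical ETL sketch: extract from CSV, transform, load into SQLite.
# File name and schema are invented for illustration only.

import csv
import sqlite3


def extract(path: str):
    """Extract: stream rows out of a CSV file as dictionaries."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def transform(rows):
    """Transform: enforce types and normalise values."""
    for row in rows:
        yield {
            "order_id": int(row["order_id"]),
            "amount_eur": round(float(row["amount"]), 2),
            "country": row["country"].strip().upper(),
        }


def load(rows, db_path: str = "warehouse.db"):
    """Load: upsert the transformed rows into a SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER PRIMARY KEY, "
        "amount_eur REAL, country TEXT)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO orders VALUES (:order_id, :amount_eur, :country)",
        rows,
    )
    con.commit()
    con.close()


if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```

In practice the same structure would be scheduled and monitored by an orchestrator rather than run by hand, which is where the automation the article argues for comes in.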
  5. Andrew Lennox is a passionate IT professional responsible for implementing strategic and transformational initiatives to support business development. View the full article
  6. While customers use observability platforms to identify issues in cloud environments, the next chapter of value is removing manual work from processes and building IT automation into their organizations. At the Dynatrace Innovate conference in Barcelona, Bernd Greifeneder, Dynatrace chief technology officer, discussed key examples of how the Dynatrace observability platform delivers value well beyond traditional monitoring. While the Dynatrace observability platform has long delivered return on investment for customers in identifying root cause, it has evolved to cover far more use cases for observability and security, addressing customers’ specific needs. Bernd Greifeneder outlines key use cases for IT automation at Dynatrace Innovate. “How do we make the data accessible beyond the use cases that Dynatrace gives you, beyond observability and security?” Greifeneder said. “You have business use cases that are unique to your situation that we can’t anticipate.” Greifeneder noted that now, with Dynatrace AutomationEngine, which triggers automated workflows, teams can mature beyond executing tasks manually. They can be proactive in identifying issues in cloud environments. “It allows you to take answers into automation, into action, whether it’s predictive or reactive,” Greifeneder explained.

The road to modern observability

As organizations continue to operate in the cloud, they discover that cloud observability becomes paramount to their ability to run efficiently and securely. Observability can help them identify problems in their cloud environments and provide information about the precise root cause of a problem. For Dynatrace customers, identifying root cause comes via Davis AI, the Grail data lakehouse, which unifies data silos, and Smartscape, which topologically maps all cloud entities and relationships. As a result, customers can identify the precise source of issues in their multi- and hybrid cloud environments. But as Greifeneder noted, “just having the answers alone isn’t enough.” Organizations need to incorporate IT automation into their processes to take action on the data they gather. As customers continue to operate in the cloud, their needs for efficiency and cost-effectiveness only grow. As a result, they need to build in greater efficiency through IT automation. Data suggests that companies have come to recognize the importance of building in this efficiency. According to Deloitte’s global survey of executives, 73% of respondents said their organizations have embarked on a path to intelligent automation. “That’s why we added to Dynatrace AutomationEngine, to run workflows, or use code to automate,” Greifeneder explained. “It allows you to take answers into automation, into action, whether it’s predictive or reactive. Both are possible, tied to business or technical value–it’s all there,” he said. This is precisely how executives anticipate getting value out of IT in the coming years. Indeed, according to a Gartner 2022 survey, 80% of executives believe that AI can be applied to any business decision.

Three use cases for IT automation and optimization: How Dynatrace uses Dynatrace

1. Developer observability as a service. For developers, IT automation can bring significant productivity boosts in software development. According to one recent survey, 71% of respondents said that building out developer automation was a key initiative. With Grail, for example, a DevOps team can pre-scan logs. The platform can pre-analyze and classify logs.
With this process, DevOps teams can identify whether code includes a high-priority bug that has to be fixed immediately (a simplified sketch of this kind of log classification appears after this entry). By taking this approach, Dynatrace itself reduced bugs by 36% before flaws arrived in production-level code.

2. Security. As security threats continue to mount, it’s impossible for teams to identify threats with manual work alone. “Security has to be automated,” Greifeneder noted. Identifying threats quickly is critical to prevent applications and data from being compromised. “When there is something urgent, every second matters,” Greifeneder said. He noted that Dynatrace teams used the platform to reduce time spent identifying and resolving security issues. By unifying all relevant events in Grail, teams could identify suspicious activity, then have the platform automatically trigger the steps to analyze those activities. Next, the platform can automatically classify activities that need immediate action or route information to the right team to take action. As Greifeneder noted, Dynatrace teams reduced the entire process of identifying and addressing security vulnerabilities from days to minutes. “This is massive improvement of security and massive productivity gain,” Greifeneder noted. Moreover, data is scattered and needs to be unified. “All security data is in silos.” The goal for modern observability is to automate the process of identifying suspicious activity. The Dynatrace platform uses AutomationEngine and workflows, automatically triggering steps to analyze threats.

3. Data-driven cloud optimization. In this use case, Greifeneder said, the goal was to optimize the Dynatrace platform itself and make it “as performant and cost-effective” for customers as possible. The Dynatrace team gathered cloud billing data, infrastructure data, and networking data, and analyzed that data in Dynatrace Notebooks. As a result, the team found that the cloud architecture had resulted in overprovisioning of resources. By analyzing the data in Dynatrace Notebooks, the team discovered, “There is too much cross-availability-zone traffic,” Greifeneder recalled. “There are way over 30 availability zones. By running those queries the team found that they could not only reduce data transfer costs but also reduce the monthly data volume by 23 petabytes–that’s massive and brings an even higher-performant Grail to you. These are both wins—business and technical.”

As Greifeneder noted in the session, these new capabilities are designed to enable customers to extract greater value from the platform through multiple uses. The data residing in the unified platform enables a variety of uses. “It was you as customers who told me, ‘Dynatrace’s data is gold.’” The post Bringing IT automation to life at Dynatrace Innovate Barcelona appeared first on Dynatrace news. View the full article
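A rough sketch of the log pre-scanning and classification idea from the first use case, written here as plain Python rather than the Grail-based pre-analysis the article describes. The patterns and priority rules are invented for illustration.

```python
# Hypothetical sketch of pre-scanning and classifying log lines to flag
# high-priority bugs before code reaches production. The patterns and
# priorities below are invented; the article describes doing this with
# Dynatrace Grail rather than a standalone script.

import re
from collections import Counter

RULES = [
    (re.compile(r"OutOfMemoryError|segfault|panic", re.I), "P1-critical"),
    (re.compile(r"NullPointerException|unhandled exception", re.I), "P2-high"),
    (re.compile(r"\bERROR\b"), "P3-normal"),
    (re.compile(r"\bWARN(ING)?\b"), "P4-low"),
]


def classify(line: str):
    """Return the first matching priority for a log line, or None."""
    for pattern, priority in RULES:
        if pattern.search(line):
            return priority
    return None


def prescan(lines):
    """Return a priority histogram and the lines that need immediate action."""
    counts = Counter()
    critical = []
    for line in lines:
        priority = classify(line)
        if priority:
            counts[priority] += 1
            if priority == "P1-critical":
                critical.append(line)
    return counts, critical


if __name__ == "__main__":
    sample = [
        "2024-01-12 INFO request served in 12ms",
        "2024-01-12 ERROR NullPointerException in CheckoutService",
        "2024-01-12 FATAL java.lang.OutOfMemoryError: Java heap space",
    ]
    counts, critical = prescan(sample)
    print(counts)
    print("needs immediate action:", critical)
```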
  7. VMware previewed what will become a suite of intelligent assistants that use generative artificial intelligence (AI) to automate the management of a wide range of IT tasks. View the full article
  8. In recent years, the world has witnessed a significant shift towards remote working, largely driven by global events such as the COVID-19 pandemic. This transformation has necessitated the adoption of new tools and strategies, with automation emerging as a key enabler of effective remote work. So, what does automation mean in this context? Well, it’s […] View the full article
  9. Nearly every company wants to evolve its digital transformation and increase the pace of software delivery and operations. And infrastructure automation is a new means to achieve these ends. It supports the emerging discipline of platform engineering and can increase scalability and reactivity to unforeseen events, enabling software ecosystems to be more anti-fragile. Increased infrastructure […] View the full article
  10. The amount of testing that we could be doing is massive. Most of us don’t look at testing across the spectrum and all-inclusively, but let’s do that for a second. We have functional testing at the code level, which is reasonably well automated already, if your shop is using such automation. Then we have integration […] The post Make a Plan for Test Automation appeared first on DevOps.com. View the full article
  11. Ushers in Next Era of Software Testing with Artificial Intelligence Mountain View, Calif. — Oct. 20, 2020 — Tricentis, the world’s #1 testing platform for modern cloud and enterprise applications, today announced Vision AI, the core technology that will now power Tosca. Vision AI is the industry’s most advanced AI-based test design and automation technology which allows organizations to address […] The post Tricentis Introduces Next Generation AI-Powered Test Automation appeared first on DevOps.com. View the full article
  12. The future of Fintech infrastructure is hybrid multi-cloud. Using private and public cloud infrastructure at the same time allows financial institutions to optimise their CapEx and OpEx costs.

Why private clouds?

A private cloud is an integral part of a hybrid multi-cloud strategy for financial services organisations. It enables financial institutions to derive competitive advantage from agile implementations without incurring the security and business risks of a public cloud. Private clouds provide a more stable solution for financial institutions by dedicating exclusive hardware within financial firms’ own data centres. Private clouds also enable financial institutions to move from a traditional IT engagement model to a DevOps model and transform their IT groups from an infrastructure provider to a service provider (via a SaaS model).

OpenStack for financial services

OpenStack provides a complete ecosystem for building private clouds. Built from multiple sub-projects as a modular system, OpenStack allows financial institutions to build out a scalable private (or hybrid) cloud architecture that is based on open standards. OpenStack enables application portability among private and public clouds, allowing financial institutions to choose the best cloud for their applications and workflows at any time, without lock-in. It can also be integrated with a variety of key business systems such as Active Directory and LDAP. OpenStack software provides a solution for delivering infrastructure as a service (IaaS) to end users through a web portal and provides a foundation for layering on additional cloud management tools. These tools can be used to implement higher levels of automation and to integrate analytics-driven management applications for optimising cost, utilisation and service levels. OpenStack software provides support for improving service levels across all workloads and for taking advantage of the high availability capabilities built into cloud-aware applications. In the world of Open Banking, the delivery of a financial application or digital customer service often depends on many contributors from various organisations working collaboratively to deliver results. Large financial institutions – the likes of PayPal and Wells Fargo – are using OpenStack for their private cloud builds. These companies are successfully leveraging the capabilities of OpenStack software that enables efficient resource pooling, elastic scalability and self-service provisioning for end users.

The challenge

The biggest challenge of OpenStack is automating everyday operations, year after year, while OpenStack continues to evolve rapidly.

The solution – ops automation

Canonical solves this problem with total automation that decouples architectural choices from the operations codebase that supports upgrades, scaling, integration and bare metal provisioning. From bare metal to cloud control plane, Canonical’s Charmed OpenStack uses automation everywhere, leveraging model-driven operations.

Charmed OpenStack

Charmed OpenStack is an enterprise-grade OpenStack distribution that leverages MAAS, Juju, and the OpenStack charmed operators to simplify the deployment and management of an OpenStack cloud. Canonical’s Charmed OpenStack ensures private cloud price-performance, providing full automation around OpenStack deployments and operations. Together with Ubuntu, it meets the highest security, stability and quality standards in the industry.
Benefits of Charmed OpenStack for fintechs

Secure, compliant, hardened: Canonical provides up to ten years of security updates for Charmed OpenStack under the UA-I subscription for customers who value stability above all else. Moreover, the support package includes various EU and US regulatory compliance options. Additional hardening tools and benchmarks ensure the highest level of security.

Every OpenStack version supported: Each upstream OpenStack version comes with new features that may bring measurable benefits to your business. We recognise that and provide full support for every version of OpenStack within two weeks of the upstream release. Every two years we release an LTS version of Charmed OpenStack which we support for five years.

Upgrades included, fully automated: OpenStack upgrades are known to be painful due to the complexity of the process. By leveraging the model-driven architecture and using OpenStack Charms for automation purposes, Charmed OpenStack can be easily upgraded between its consecutive versions. This allows you to stay up to date with the upstream features, while not putting additional pressure on your operations team.

A case in point

The client: SBI BITS. SBI BITS provides IT services and infrastructure to SBI Group companies and affiliates. SBI Group is Japan’s market-leading financial services company group headquartered in Tokyo.

When public cloud is not an option: “Operating in the highly regulated financial services industry, we need complete control over our data. If our infrastructure isn’t on-premise, it makes regulatory compliance far more complicated.” – Georgi Georgiev, CIO at SBI BITS

The challenge: With hundreds of affiliate companies relying on it for IT services, SBI BITS – the FinTech arm of SBI Group – was under immense pressure to make its infrastructure available simultaneously to numerous internal clients, often with critically short time-to-market requirements.

The solution: Canonical designed and built the initial OpenStack deployment within a few weeks, and is now providing ongoing maintenance through the Ubuntu Advantage for Infrastructure enterprise support package. The initial implementation consisted of 73 nodes each at two sites, deployed as hyper-converged infrastructure and running Ubuntu 18.04. This architecture enables a software-defined approach that unlocks greater automation and more efficient resource utilisation, leading to significant cost savings.

The outcome: Canonical’s OpenStack deployment has streamlined the infrastructure delivery, ensuring that the company can meet the IT needs of SBI Group without the stress. Automation eliminates the majority of physical work involved in resource provisioning. Canonical delivered OpenStack at one third of the price of competing proposals. Hyper-converged architecture and full-stack support seek to deliver both CAPEX and OPEX savings. “Canonical’s solution was a third of the price of the other proposals we’d received. The solution is also proving to be highly cost-effective, both from CAPEX and OPEX perspectives.” – Georgi Georgiev, CIO at SBI BITS

Execute your hybrid cloud strategy

OpenStack gives financial institutions the ability to seamlessly move workloads from one cloud to another, whether private or public. It also accelerates time-to-market by giving a financial institution’s business units a self-service portal to access necessary resources on demand, and an API-driven platform for developing cloud-aware apps (sketched briefly below).
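As a brief illustration of the self-service, API-driven provisioning mentioned above, here is a hedged sketch using the openstacksdk Python library. The cloud name, image, flavor, and network are placeholders that would normally come from a clouds.yaml entry and the institution's own service catalogue; this is not part of the Charmed OpenStack tooling itself.

```python
# Hypothetical sketch of API-driven self-service provisioning against an
# OpenStack private cloud, using the openstacksdk library. The cloud name,
# image, flavor, and network below are placeholders.

import openstack


def provision_app_server(name: str) -> None:
    # Credentials and endpoints are read from a clouds.yaml entry
    # (here assumed to be called "fintech-private").
    conn = openstack.connect(cloud="fintech-private")

    server = conn.create_server(
        name=name,
        image="ubuntu-22.04",      # placeholder image name
        flavor="m1.medium",        # placeholder flavor
        network="app-tier-net",    # placeholder tenant network
        wait=True,                 # block until the server is ACTIVE
        auto_ip=True,              # attach a floating IP if one is available
    )
    print(f"{server.name} is {server.status}")


if __name__ == "__main__":
    provision_app_server("payments-api-01")
```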
OpenStack is a growing software ecosystem consisting of various interconnected components. Therefore, its operations can at times be challenging even in a fully automated environment. Canonical recognises that and offers fully managed services for organisations. Canonical’s managed OpenStack provides 24×7 cloud monitoring, daily maintenance, regular software updates, OpenStack upgrades and more. We are always here to discuss your cloud computing needs and to help you successfully execute your hybrid cloud strategy. Get in touch View the full article