Search the Community

Showing results for tags 'automation'.

Found 10 results

  1. As organisations embark on becoming more digitally enabled, the road is paved with many twists and turns. Data science teams and analysts, as well as the teams they serve, know full well that the path to analytic excellence is not linear. The good news is organisations have opportunities to unlock value at each step along the way. The pattern by which companies develop strength within their data is highly repeatable, underpinned by more ways to manipulate data and unlock the benefits of automation. While automation in and of itself isn't digital transformation, since new processes are not being launched, it frequently delivers huge value and lays the groundwork for organizations to make major operational improvements. With automation in place, organizations can harness more analytical approaches with modelling enhanced by AI and ML. Once these core capabilities move out of the sole domain of technical IT teams and are put into the hands of more domain experts, true transformation of business processes occurs and more overall value is derived from analytics.

Delivering value from the start

Automation is typically one of the earliest steps in overhauling enterprise analytics. In my experience, this step won't deliver as much value as those that follow – but it's still significant and, beyond that, vital. Let's take a large manufacturer automating its VAT recovery process as an example. While some might assume that this type of automation simply saves time, many companies are not recovering 100% of their VAT because the manual, legacy process has a cost, and if the VAT is below a given value, it might not be worth recovering. When this process is automated, 100% VAT recovery yields become possible – the hard cash savings for the business can't be ignored. Finance teams can automate many of the manual processes required to close their books each quarter, reducing the time it takes to close from a matter of weeks to days. Audit teams can upgrade from manual audits repeated every couple of years to continuous audits which check for issues daily and report them automatically and instantly. From reducing cost and risk to increasing revenue and saving time for employees (your greatest asset), automation is having a huge impact on organizations around the globe. With this lens, it's evident that automation amounts to much more than time savings.

Two varying approaches

There are two very different approaches that organizations have historically taken to drive automation. The first, which has a more limited impact, is to form a centralized team and have that small team attempt to automate processes around the business. The second approach is to upskill employees so that every worker is capable of automating a process. This latter approach can scale at a very different pace and deliver far greater impact. Organizations can upskill tens of thousands of employees and automate millions of manual processes. This would be very difficult with a small team trying to perform the same automation. It can lead to substantial business benefits, including increased productivity, reduced costs and greater revenue. Historically, of course, the latter approach has also been nigh on impossible to execute – given the requirement for familiarity with coding languages to use code-heavy technologies. But that was then. Today, mature low-code systems present a massive opportunity to upskill employees to automate processes simply by asking the right questions.
This isn't simply an alternative route – it should be the only route for organizations that are serious about achieving analytical excellence. Code-free platforms remove the need for departments to wait in queues for IT teams to deliver an application that fits their needs. They put the power of automated analytical and development capabilities into the hands of business domain experts with the specific expertise needed to get valuable insight from analytics more quickly. Therefore, upskilling efforts need to be directed towards making such a broad data-literate culture possible.

Providing teams with automation tools

For many organisations, a common strategy for driving upskilling and capability is to focus on their new employees. With attrition and growth rates at many businesses ranging between 5 and 10%, organisations can face the challenge of replacing as much as a quarter of their entire team every 18 months. Providing training and technology that let inevitable new joiners automate processes is therefore essential for every department to drive efficiencies and upskill the overall workforce. This is already taking place within the education sector, with many schools beginning to implement automation technologies and analytic techniques in their curricula, particularly in business schools and in accounting, marketing and supply chain courses. Businesses that do not take notice and prioritize these skills as well will likely not only continue to suffer from the inefficiencies of manual processes but could also risk the attrition cost of failing to provide their employees with the modern tools being taught in the base curricula of these degree programs. Automation is the first step towards analytics excellence, but its relevance doesn't stop there. It's through automation that leaders can unlock clear, traceable benefits for their organizations in terms of overhauled processes, while setting them on the right path when it comes to upskilling and democratizing data. This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
  2. In the previous blog post of this series, we discussed the crucial role of Dynatrace as an orchestrator that steps in to stop the testing phase in case of any errors. Additionally, Dynatrace equips SREs and application teams with valuable insights powered by Davis® AI. In this blog post of the series, we will explore the use of Site Reliability Guardian (SRG) in more detail. SRG is a potent tool that automates the analysis of release impacts, ensuring validation of service availability, performance, and capacity objectives throughout the application ecosystem by examining the effect of advanced test suites executed earlier in the testing phase.

Validation stage overview

The validation stage is a crucial step in the CI/CD (Continuous Integration/Continuous Deployment) process. It involves carefully examining the test results from the previous testing phase. The main goal of this stage is to identify and address any issues or problems that were detected. Doing so reduces the risk of production disruptions and instills confidence in both SREs (Site Reliability Engineers) and end-users. Depending on the outcome of the examination, the build is either approved for deployment to the production environment or rejected.

Challenges of the validation stage

In the validation phase, SREs face specific challenges that significantly slow down the CI/CD pipeline. Foremost among these is the complexity associated with data gathering and analysis. The burgeoning reliance on cloud technology stacks amplifies this challenge, creating hurdles due to budgetary constraints, time limitations, and the potential risk of human errors. Additionally, another pivotal challenge arises from the time spent on issue identification. Both SREs and application teams invest substantial time and effort in locating and rectifying software glitches within their local environments. These prolonged processes not only strain resources but also introduce delays within the CI/CD pipeline, hampering the timely release of new features to end-users.

Mitigate challenges with Dynatrace

With the support of Dynatrace Grail™, AutomationEngine, and the Site Reliability Guardian, SREs and application teams are assisted in making informed release decisions by utilizing telemetry observability and other insights. Additionally, the Visual Resolution Path within generated problem reports helps in reproducing issues in their environments. The Visual Resolution Path offers a chronological overview of events detected by Dynatrace across all components linked to the underlying issue. It incorporates the automatic discovery of newly generated compute resources and any static resources that are in play. This view seamlessly correlates crucial events across all affected components, eliminating the manual effort of sifting through various monitoring tools for infrastructure, process, or service metrics. As a result, businesses and SREs can redirect their manual diagnostic efforts toward fostering innovation. Configure an action for the Site Reliability Guardian in the workflow. The action should focus on validating the guardian's adherence to the application ecosystem's specific objectives (SLOs). Additionally, align the action's validation window with the timeframe derived from the recently completed test events. As the action begins, the Site Reliability Guardian (SRG) evaluates the set objective by analyzing the telemetry data produced during advanced test runs.
At the same time, SRG uses DAVIS_EVENTS to identify any potential problems, which could result in one of two outcomes.

Outcome #1: Build promotion

Once the newly developed code is in line with the objectives outlined in the Guardian—and assuming that Davis AI doesn't generate any new events—the SRG action activates the successful path in the workflow. This path includes a JavaScript action called promote_jenkins_build, which triggers an API call to approve the build being considered, leading to the promotion of the build deployment to production.

Outcome #2: Build rejection

If Davis AI generates any issue events related to the wider application ecosystem or if any of the objectives configured in the defined guardian are not met, the build rejection workflow is automatically initiated. This triggers the disapprove_jenkins_build JavaScript action, which leads to the rejection of the build. (A hedged sketch of this kind of promote-or-reject call appears after this results list.) Moreover, by utilizing helpful service analysis tools such as Response Time Hotspots and Outliers, SREs can easily identify the root cause of any issues and save considerable time that would otherwise be spent on debugging or taking necessary actions. SREs can also make use of the Visual Resolution Path to recreate the issues on their setup or identify the events for different components that led to the issue. In both scenarios, a Slack message is sent to the SREs and the impacted app team, capturing the build promotion or rejection. The telemetry data's automated analytics, powered by SRG and Davis AI, simplify the process of promoting builds. This approach effectively tackles the challenges that come with complex application ecosystems. Additionally, the integration of service tools and Visual Resolution Path helps to identify and fix issues more quickly, resulting in an improved mean time to repair (MTTR).

Validation in the platform engineering context

Dynatrace—essential within the realm of platform engineering—streamlines the validation process, providing critical insights into performance metrics and automating the identification of build failures. By leveraging SRG and Visual Resolution Path, along with Davis AI causal analysis, development teams can quickly pinpoint issues and rectify them, ensuring a fail-smart approach. The integration of service analysis tools further enhances the validation phase by automating code-level inspections and facilitating timely resolutions. Through these orchestrated efforts, platform engineering promotes a collaborative environment, enabling more efficient validation cycles and fostering continuous enhancement in software quality and delivery. In conclusion, the integration of Dynatrace observability provides several advantages for SREs and DevOps, enabling them to enhance the key DORA metrics:

  • Deployment Frequency: Improved deployment rate through faster and more informed decision-making. SREs gain visibility into each stage, allowing them to build faster and promptly address issues using the Dynatrace feature set.
  • Change Lead Time: Enhanced efficiency across stages with Dynatrace observability and security tools, leading to quicker postmortems and fewer interruption calls for SREs.
  • Change Failure Rate: Reduction in incidents and rollbacks achieved by utilizing "Configuration Change" events or deployment and annotation events in Dynatrace. This enables SREs to allocate their time more effectively to proactively address actual issues instead of debugging underlying problems.
  • Time to Restore Service: While these proactive approaches can help improve Deployment Frequency and Change Lead Time, telemetry observability data with Davis AI, the Dynatrace causation engine, can aid in improving Time to Restore Service.

In addition, Dynatrace can leverage the events and telemetry data that it receives during the Continuous Integration/Continuous Deployment (CI/CD) pipeline to construct dashboards. By using JavaScript and DQL, these dashboards can help generate reports on the current DORA metrics (a worked example of two of these metrics follows this results list). This method can be expanded to gain a better understanding of the SRG executions, enabling us to pinpoint the responsible guardians and the SLOs managed by various teams and identify any instances of failure. Addressing such failures can lead to improvements and further enhance the DORA metrics. Below is a sample dashboard that provides insights into DORA and SRG execution. In the next blog post, we'll discuss the integration of security modules into the DevOps process with the aim of achieving DevSecOps. Additionally, we'll explore the incorporation of Chaos Engineering during the testing stage to enhance the overall reliability of the DevSecOps cycle. We'll ensure that these efforts don't affect Time to Restore Service or build turnaround time, and examine how we can improve the fifth key DORA metric, Reliability.

What's next?

Curious to see how it all works? Contact us to schedule a demo and we'll walk you through the various workflows, JavaScript tasks, and the dashboards discussed in this blog series. If you're an existing Dynatrace Managed customer looking to upgrade to Dynatrace SaaS, see How to start your journey to Dynatrace SaaS. The post Automate CI/CD pipelines with Dynatrace: Part 4, Validation stage appeared first on Dynatrace news. View the full article
  3. ETL, or Extract, Transform, Load, serves as the backbone for data-driven decision-making in today's rapidly evolving business landscape. However, traditional ETL processes often suffer from challenges like high operational costs, error-prone execution, and difficulty scaling. Enter automation: not merely a facilitator but a necessity to alleviate these burdens. So, let's dive into the transformative impact of automating ETL workflows, the tools that make it possible, and the methodologies that ensure robustness.

The Evolution of ETL

Gone are the days when ETL processes were relegated to batch jobs that ran in isolation, churning through records in an overnight slog. The advent of big data and real-time analytics has fundamentally altered the expectations of ETL processes. As Doug Cutting, the co-creator of Hadoop, aptly said, "The world is one big data problem." This statement resonates more than ever as we are bombarded with diverse, voluminous, and fast-moving data from myriad sources. View the full article
  4. Andrew Lennox is a passionate IT professional responsible for implementing strategic and transformational initiatives to support business development. View the full article
  5. While customers use observability platforms to identify issues in cloud environments, the next chapter of value is removing manual work from processes and building IT automation into their organizations. At the Dynatrace Innovate conference in Barcelona, Bernd Greifeneder, Dynatrace chief technology officer, discussed key examples of how the Dynatrace observability platform delivers value well beyond traditional monitoring. While the Dynatrace observability platform has long delivered return on investment for customers in identifying root cause, it has evolved to cover far more use cases for observability and security, addressing customers' specific needs.

Bernd Greifeneder outlines key use cases for IT automation at Dynatrace Innovate.

"How do we make the data accessible beyond the use cases that Dynatrace gives you, beyond observability and security?" Greifeneder said. "You have business use cases that are unique to your situation that we can't anticipate." Greifeneder noted that now, with Dynatrace AutomationEngine, which triggers automated workflows, teams can mature beyond executing tasks manually. They can be proactive in identifying issues in cloud environments. "It allows you to take answers into automation, into action, whether it's predictive or reactive," Greifeneder explained.

The road to modern observability

As organizations continue to operate in the cloud, they discover that cloud observability becomes paramount to their ability to run efficiently and securely. Observability can help them identify problems in their cloud environments and provide information about the precise root cause of a problem. For Dynatrace customers, identifying root cause comes via Davis AI, the Grail data lakehouse, which unifies data silos, and Smartscape, which topologically maps all cloud entities and relationships. As a result, customers can identify the precise source of issues in their multi- and hybrid cloud environments. But as Greifeneder noted, "just having the answers alone isn't enough." Organizations need to incorporate IT automation into their processes to take action on the data they gather. As customers continue to operate in the cloud, their needs for efficiency and cost-effectiveness only grow. As a result, they need to build in greater efficiency through IT automation. Data suggests that companies have come to recognize the importance of building in this efficiency. According to Deloitte's global survey of executives, 73% of respondents said their organizations have embarked on a path to intelligent automation. "That's why we added to Dynatrace AutomationEngine, to run workflows, or use code to automate," Greifeneder explained. "It allows you to take answers into automation, into action, whether it's predictive or reactive. Both are possible, tied to business or technical value–it's all there," he said. This is precisely how executives anticipate getting value out of IT in the coming years. Indeed, according to a Gartner 2022 survey, 80% of executives believe that AI can be applied to any business decision.

Three use cases for IT automation and optimization: How Dynatrace uses Dynatrace

1. Developer observability as a service. For developers, IT automation can bring significant productivity boosts in software development. According to one recent survey, 71% of respondents said that building out developer automation was a key initiative. With Grail, for example, a DevOps team can pre-scan logs; the platform can pre-analyze and classify them.
With this process, DevOps teams can identify whether code includes a high-priority bug that has to be fixed immediately. By taking this approach, Dynatrace itself reduced bugs by 36% before flaws arrived in production-level code.

2. Security. As security threats continue to mount, it's impossible for teams to identify threats with manual work alone. "Security has to be automated," Greifeneder said, noting that identifying threats quickly is critical to prevent applications and data from being compromised. "When there is something urgent, every second matters," Greifeneder said. He noted that Dynatrace teams used the platform to reduce time spent identifying and resolving security issues. By unifying all relevant events in Grail, teams could identify suspicious activity, then have the platform automatically trigger the steps to analyze those activities. Next, the platform can automatically classify activities that need immediate action or route information to the right team to take action. As Greifeneder noted, Dynatrace teams reduced the entire process of identifying and addressing security vulnerabilities from days to minutes. "This is massive improvement of security and massive productivity gain," Greifeneder noted. Moreover, security data is often scattered and needs to be unified: "All security data is in silos." The goal for modern observability is to automate the process of identifying suspicious activity; the Dynatrace platform uses AutomationEngine and workflows, automatically triggering steps to analyze threats.

3. Data-driven cloud optimization. In this use case, Greifeneder said, the goal was to optimize the Dynatrace platform itself and make it "as performant and cost-effective" for customers as possible. The Dynatrace team gathered cloud billing data, infrastructure data, and networking data, and analyzed that data in Dynatrace Notebooks. As a result, the team found that the cloud architecture had resulted in overprovisioning of resources. By analyzing the data in Dynatrace Notebooks, the team discovered, "There is too much cross-availability-zone traffic," Greifeneder recalled. "There are way over 30 availability zones. By running those queries the team found that they could not only reduce data transfer costs but also reduce the monthly data volume by 23 petabytes–that's massive and brings an even higher-performant Grail to you. These are both wins—business and technical." As Greifeneder noted in the session, these new capabilities are designed to enable customers to extract greater value from the platform through multiple uses, and the data residing in the unified platform enables a variety of them. "It was you as customers who told me, 'Dynatrace's data is gold.'" The post Bringing IT automation to life at Dynatrace Innovate Barcelona appeared first on Dynatrace news. View the full article
  6. VMware previewed what will become a suite of intelligent assistants that use generative artificial intelligence (AI) to automate the management of a wide range of IT tasks. View the full article
  7. In recent years, the world has witnessed a significant shift towards remote working, largely driven by global events such as the COVID-19 pandemic. This transformation has necessitated the adoption of new tools and strategies, with automation emerging as a key enabler of effective remote work. So, what does automation mean in this context? Well, it’s […] View the full article
  8. Nearly every company wants to evolve its digital transformation and increase the pace of software delivery and operations. And infrastructure automation is a new means to achieve these ends. It supports the emerging discipline of platform engineering and can increase scalability and reactivity to unforeseen events, enabling software ecosystems to be more anti-fragile. Increased infrastructure […] View the full article
  9. The amount of testing that we could be doing is massive. Most of us don’t look at testing across the spectrum and all-inclusively, but let’s do that for a second. We have functional testing at the code level, which is reasonably well automated already, if your shop is using such automation. Then we have integration […] The post Make a Plan for Test Automation appeared first on DevOps.com. View the full article
  10. Ushers in Next Era of Software Testing with Artificial Intelligence Mountain View, Calif. — Oct. 20, 2020 — Tricentis, the world’s #1 testing platform for modern cloud and enterprise applications, today announced Vision AI, the core technology that will now power Tosca. Vision AI is the industry’s most advanced AI-based test design and automation technology which allows organizations to address […] The post Tricentis Introduces Next Generation AI-Powered Test Automation appeared first on DevOps.com. View the full article
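
To make the build promotion and rejection flow described in result 2 more concrete, here is a minimal, hypothetical sketch of the kind of API call a promote_jenkins_build / disapprove_jenkins_build workflow action might make. It assumes a Jenkins pipeline paused on an input step and uses the Pipeline Input Step endpoints (proceedEmpty to continue, abort to reject); the job name, input id, environment variables, and validation-result shape are all illustrative and not taken from the Dynatrace article.

```typescript
// Hypothetical sketch (not the Dynatrace implementation): resume or abort a
// Jenkins pipeline that is paused on an `input` step, based on a Site
// Reliability Guardian-style validation result.
// Assumed endpoints come from the Jenkins Pipeline Input Step plugin:
//   POST <job>/<build>/input/<inputId>/proceedEmpty  -> continue (promote)
//   POST <job>/<build>/input/<inputId>/abort         -> reject
// Requires Node 18+ (global fetch). All identifiers below are illustrative.

interface ValidationResult {
  passed: boolean;             // did every guardian objective (SLO) pass?
  failedObjectives: string[];  // names of objectives that failed, if any
}

const JENKINS_URL = process.env.JENKINS_URL ?? "https://jenkins.example.com";
const JOB = "payment-service";      // hypothetical pipeline job name
const INPUT_ID = "PromoteToProd";   // id of the input step in the Jenkinsfile
const AUTH =
  "Basic " +
  Buffer.from(
    `${process.env.JENKINS_USER}:${process.env.JENKINS_API_TOKEN}`
  ).toString("base64");

async function promoteOrRejectBuild(
  buildNumber: number,
  result: ValidationResult
): Promise<void> {
  // proceedEmpty resumes the paused input step; abort rejects the build.
  const verb = result.passed ? "proceedEmpty" : "abort";
  const url = `${JENKINS_URL}/job/${JOB}/${buildNumber}/input/${INPUT_ID}/${verb}`;

  const response = await fetch(url, {
    method: "POST",
    headers: { Authorization: AUTH },
  });
  if (!response.ok) {
    throw new Error(`Jenkins returned HTTP ${response.status} for ${verb}`);
  }

  console.log(
    result.passed
      ? `Build #${buildNumber} promoted to production.`
      : `Build #${buildNumber} rejected: ${result.failedObjectives.join(", ")}`
  );
}

// Example: reject build 42 because a latency objective failed.
promoteOrRejectBuild(42, {
  passed: false,
  failedObjectives: ["p95 response time <= 500 ms"],
}).catch((err) => console.error(err));
```

In a real workflow the validation result would come from the guardian evaluation rather than being hard-coded, and a notification step (such as the Slack message the article mentions) would typically follow.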
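Result 2 also mentions dashboards, built with JavaScript and DQL, that report on DORA metrics. As a worked illustration of what such a report computes, here is a small TypeScript sketch that derives deployment frequency and change failure rate from a list of deployment events; the event shape and the sample data are invented for this example and are not the Dynatrace event schema.

```typescript
// Illustrative sketch: compute two DORA metrics from deployment events.
// The DeploymentEvent shape and the sample data are made up for this example.

interface DeploymentEvent {
  timestamp: Date;
  succeeded: boolean; // false if the change caused an incident or rollback
}

// Deployment frequency, normalized to deployments per week.
function deploymentsPerWeek(events: DeploymentEvent[], windowDays: number): number {
  return (events.length / windowDays) * 7;
}

// Change failure rate: share of deployments that led to a failure in production.
function changeFailureRate(events: DeploymentEvent[]): number {
  if (events.length === 0) return 0;
  const failures = events.filter((e) => !e.succeeded).length;
  return failures / events.length;
}

// Example over a 28-day window with four deployments, one of which failed.
const events: DeploymentEvent[] = [
  { timestamp: new Date("2024-05-01"), succeeded: true },
  { timestamp: new Date("2024-05-06"), succeeded: false },
  { timestamp: new Date("2024-05-13"), succeeded: true },
  { timestamp: new Date("2024-05-20"), succeeded: true },
];

console.log(`Deployments per week: ${deploymentsPerWeek(events, 28).toFixed(2)}`);    // 1.00
console.log(`Change failure rate: ${(changeFailureRate(events) * 100).toFixed(1)}%`); // 25.0%
```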