Search the Community

Showing results for tags 'snowflake'.

  1. The 2024 Snowflake Startup Challenge began with over 900 applications from startups Powered by Snowflake in more than 100 countries. Our judges narrowed that long list of contenders down to 10, and after much deliberation, they’ve now pared it down to the final three. We are pleased to announce that BigGeo, Scientific Financial Systems and SignalFlare.ai by Extropy360 will advance to the Snowflake Startup Challenge finale and compete for the opportunity to receive a share of up to $1 million in investments from Snowflake Ventures, plus exclusive mentorship and visibility opportunities from NYSE. Many thanks to the other semifinalists for their dedication and the effort they put into their presentations during the previous round of competition. Let’s get to know the 2024 finalists. BigGeo Crunching vast amounts of geospatial data is an intimidating, resource-consuming task. But the potential rewards are so rich — whether it is mapping the spread of diseases; determining optimum places for new housing developments; creating more efficient travel and shipping routes; and yes, providing more accurate and timely weather and traffic reports. BigGeo is looking to remove the intimidation factor and give companies immediate, interactive geospatial insights. “With our technology, clients can execute fast and effective geospatial queries, integrate seamlessly with Snowpark Container Services and significantly improve data visualization,” says Brent Lane, Co-Founder and CEO of BigGeo. “This makes geospatial insights more accessible and actionable than ever before, empowering organizations to make informed decisions quickly.” BigGeo’s mission is to convert the theoretical advancements uncovered during the founders’ 15 years of research into practical, market-ready solutions. Its Volumetric and Surface-Level Discrete Global Grid System (DGGS), which manages surface-level, subsurface and aerial data, supports the integration of diverse data forms, including 2D and 3D data, and facilitates dynamic interactions with spatial data. The containerized deployment within Snowflake Native Apps allows interoperability across various data sets and enables secure, governed geospatial AI. The ability to handle large volumes of real-time geospatial data and meet customers’ complex analysis demands is a particular point of pride for the BigGeo team. One of their customers, a major data supplier, used the solution to stream near real-time visualizations of a massive 150 million polygon data set at sub-second speeds, surpassing the capabilities of competing solutions. By directly connecting the visualization layer to the data supplier’s data warehouse, BigGeo enabled informed decision-making directly through the map. “This accolade has definitely energized the entire BigGeo team to continue developing solutions that address real-world challenges, drive significant industry change, and build environmentally conscious solutions that align with our vision for a greener future,” says Brett Jones, Co-Founder and President of BigGeo. The team is excited to present at the Startup Challenge finale. Not only is it an opportunity to expand BigGeo’s network and sharpen its competitive edge, but the team hopes to gain valuable insights from top-tier leaders. “We are very excited to meet Lynn Martin, President of NYSE Group. She has a profound understanding of technology’s crucial role in data integration, transformation and management — areas central to our work,” says Lane. 
“Her passion for AI and its role in enhancing data services also aligns with what we do at BigGeo.” Scientific Financial Systems Beating the market is the driving force for investment management firms. In today’s markets, that often means making quick calculations over vast volumes of data to locate those scarce alpha opportunities. This is a difficult and time-consuming task, one that spurred Scientific Financial Systems (SFS) to develop a new solution: Quotient. Quotient enables financial institutions to rapidly analyze large amounts of data and provide relevant recommendations quickly. Running natively on Snowflake, Quotient uses a novel semantic layer that integrates Python and SQL technologies. For SFS, Snowflake was in the right place at the right time: Quotient embodies the concept of “localized compute” and was an ideal candidate for the Snowflake Native App model, which helps SFS address scalability. “We are thrilled to share our story about building our applications on top of Snowflake and leveraging the power that Snowflake offers in security, performance and scalability,” says Anne Millington, Co-Founder and CEO. “It is very rewarding to be recognized for the innovations we offer.” The structure and power of Quotient give investment managers the tools they need to find increased alpha. For small investment managers, Quotient’s data science and ML techniques provide an immediate and incredibly robust quant bench. For large investment firms, SFS provides a framework to streamline and improve data analytics so their teams can spend more time on research. StarMine’s quantitative analytics research team has seen the benefits for itself. The team focuses on developing financial models based on the evaluation of factors that may impact equity performance. The Snowflake Data Cloud provides an ideal environment to evaluate factors by running them against vast amounts of historical data. With the colocation of Quotient compute and StarMine’s data, research that previously took two to three weeks can be completed in one to two days. Plus, StarMine was able to run the factor based on a broad global universe without restriction and see the results before making further customizations to drill into specific equity criteria. With full transparency into the Quotient engine’s calculations, StarMine has confidence in the results. As for the SFS leadership team, they are honing their presentation for the Snowflake Startup Challenge finale and looking forward to making their pitch. Given their focus on investment firms, winning mentorship from NYSE companies would be “a tremendous honor,” says Millington. “Gaining insight into the needs and perspectives of these NYSE-listed companies would offer great value to the SFS team on multiple levels, from product roadmap considerations, technology implications for AI and NLP, operational implications and more. The expertise and experience of learning from real-world examples at the highest echelon of success would be invaluable,” she explains. SignalFlare.ai by Extropy360 “We are beyond excited, and frankly a bit shocked” to be a Snowflake Startup Challenge finalist, says Michael Lukianoff, Founder and CEO of SignalFlare.ai. “The team and I come from years of experience in brick-and-mortar restaurant tech and data — which has never been held in the same regard as e-commerce or social media tech. 
We feel like this honor is not just a recognition of SignalFlare.ai, but of the industry we represent, where the opportunities are boundless.” Those opportunities are the reason why SignalFlare.ai’s founders created a decision intelligence platform for chain restaurants. Devastated by the impact of COVID-19, the restaurant industry needed to reinvent how it analyzed demand and used data to make better decisions in high-impact areas, like pricing, promotion, menus and new market opportunities. SignalFlare.ai built new methods, tapped into new data and created a different tech stack, developing a solution, with Snowflake at its core, that incorporates geospatial data for targeting, along with ML models for pricing optimization and risk simulation. Snowflake’s architecture allows visibility into data transformation strategies and performant cross-customer analytics. The team implements Dynamic Tables to ensure the timeliness of changing source data and filters results specific to target analytics. Streamlit apps assist in monitoring incoming data quality, and Snowpark integrates ML models for training and returns inferences to Snowflake for downstream analytics. Authentic Restaurant Brands, a restaurant acquisition fund that is part of SignalFlare’s “innovation circle” of customers — essentially a test group and sounding board for new ideas and products — has become an avid user of SignalFlare. The company started by validating the SignalFlare solution and pricing method on one brand; after seeing the benefits, it added two more and recently added a fourth after an acquisition. “I have worked with many pricing vendors in my career. SignalFlare’s approach is the most thorough and cost-effective I have encountered,” says Jorge Zaidan, Chief Strategy Officer of Authentic Restaurant Brands. Looking ahead to the finale, the SignalFlare team is eager to present and to meet Benoit Dageville, Co-Founder and President of Product at Snowflake, who is part of the Startup Challenge judging panel. “The vision that Benoit brought to life made our vision possible,” says Lukianoff, noting that Snowflake was “life-altering” for people like himself, who are obsessed with data accuracy and usability. “The features being released are constantly making our job easier and more efficient, and creating more opportunities. That is a different experience from any technology partner I’ve experienced.” Next up: Startup Challenge Finale in San Francisco Want to see which of these three startups will take the top prize? Register for Dev Day now to see the live finale and experience all of the developer-centric demos and sessions, discussions, expert Q&As and hands-on labs designed to set you up for AI/ML and app dev success. It’s also never too soon to start thinking about the next Snowflake Startup Challenge: Complete this form to get notified when the 2025 contest opens later this year. The post Meet the 2024 Snowflake Startup Challenge Finalists appeared first on Snowflake. View the full article
  2. The Snowflake Summit 2024 is all set to bring together data, AI and tech to discuss the advancements and cutting-edge innovation in the Data Cloud. It is an unmissable opportunity to connect with data experts to explore the limitless possibilities of AI in data and emerging trends in application development. Hevo is thrilled to be at […]View the full article
  3. Snowflake Partner Connect is a marketplace where Snowflake users can discover and integrate a diverse range of third-party solutions and services to enhance their data analytics, warehousing, and management capabilities. We are thrilled to announce that Hevo Data is now available on Snowflake Partner Connect. This exciting partnership allows Snowflake customers to seamlessly connect to […]View the full article
  4. Integrating on-premises data from databases into a data warehouse has become an essential part of every business workflow. By doing so, organizations take a more data-driven approach and can decide what steps to take to improve business performance. Amazon RDS Oracle is a popular relational database service that […]View the full article
  5. Your organization may choose Microsoft SQL Server (MSSQL) on AWS RDS to store its operational data because there are no upfront investments. With AWS RDS MSSQL, you only need to pay for what your organization utilizes. In today’s dynamic business world, achieving the maximum value from your data is crucial. To do so, you must […]View the full article
  6. Carrying out an insightful data analysis for your business requires the ability not only to store or access data, but also to transform it into a form that can be used to draw powerful and holistic insights. This article focuses on Snowflake Analytics and provides you with a comprehensive list of some of the […]View the full article
  7. If you’re a Snowflake customer using ServiceNow’s popular SaaS application to manage your digital workloads, data integration is about to get a lot easier — and less costly. Snowflake has announced the general availability of the Snowflake Connector for ServiceNow, available on Snowflake Marketplace. The connector provides immediate access to up-to-date ServiceNow data without the need to manually integrate against API endpoints. With just a few clicks, you can get ServiceNow data directly in your Snowflake account and combine it with other data sources, including ERP, HR and CRM systems (a sketch of such a combined query appears after this item). Once configured, the data will automatically refresh based on your desired frequency. Additionally, because the Snowflake Connector for ServiceNow is a Snowflake Native App built in the Data Cloud using the Snowflake Native App Framework, it leverages Snowflake’s built-in security and simplified governance capabilities. Customers get a fully managed service with no additional access fees set by Snowflake.

Ensono, a managed service provider and technology adviser, joined the initial preview phase of the Snowflake Connector for ServiceNow and began using it as part of its customer portal and data warehouse modernization project (watch their Show Me Your Architecture webinar here). The company was able to work with the connector throughout the course of the project, and its ease of use and ability to streamline data delivery helped Ensono achieve a remarkable 30% reduction in costs compared to the prior solution. Here’s how.

The costs of complexity

Ensono, a global firm with more than 2,900 associates worldwide, offers flexible, personalized technology services and solutions for organizations across a variety of industries. To monitor the performance of mainframes, servers and cloud systems, the company needs to analyze relevant metadata and promptly address any concerns or issues. ServiceNow is Ensono’s primary system for both internal service ticket tracking and customer ticket management — and a valuable source of data for performance analysis.

To extract data from ServiceNow, the company had been using a legacy tool to replicate data to other systems, including SQL Server databases, the Envision portal, its data warehouse, SLA tracking and reporting. During the data migration part of its modernization efforts, Ensono found itself syncing four different instances of ServiceNow, each requiring a separate server and license. Multiple teams were needed to maintain the complex environment. Additionally, data delivery to the Envision customer portal was heavily manual and inefficient, and internal and external ServiceNow service tickets were getting bogged down. The process quickly became costly and took away valuable time from other IT projects that could have added value to the business. In addition, the legacy tool caused performance issues for the source system due to the resource strain from its API calls.

The savings of simplicity

To manage these challenges, Ensono engaged Evolution Analytics, a data analytics consultancy. Adopting Snowflake as a platform gave Ensono a single source of connected data to use across its enterprise. And the Snowflake Connector for ServiceNow enabled both an initial load of historical data as well as incremental updates, giving Ensono a way to get ServiceNow data directly into its Snowflake account within minutes.

The Snowflake Connector for ServiceNow makes it possible to move ServiceNow data into Snowflake quickly and easily. With control over how frequently it is refreshed, Ensono can be sure the latest ServiceNow data is always available. These updates improved the output of the Envision portal. The company was also able to seamlessly share performance data with its clients through Snowflake Secure Data Sharing. This foundation makes rolling data out to customers a minor change for Ensono, allowing it to scale and support more customers easily. Also, customers can see only their own data, which preserves privacy. Additionally, there is no longer any performance impact on the source system because there are no API calls — all of the analytical work is done in Snowflake. And Ensono’s end customers were not impacted by all the changes and updates. Some of its bigger clients are building out processes to automate the loading of data into their own ServiceNow instances, so they can see their Ensono tickets tracked in the system.

The connector has helped Ensono save on resource and hardware costs, not to mention licensing fees. Previously, multiple teams worked for hours on data integration; Snowflake now handles it all in minutes. Each ServiceNow instance used to require a different server for a target database and a license; with Snowflake, no additional hardware or licenses are needed. Consolidating the systems and removing extra hardware and licenses also reduced the time and resources spent maintaining the legacy tool, along with its VM costs and SQL Server licenses.

With the implementation of Snowflake, Ensono’s data philosophy changed to one that is built on ease of ingestion. If it needs to share data with other parties, it will use Snowflake Secure Data Sharing to simplify the collaboration process. If an additional system or application needs to be connected to Snowflake, Ensono will leverage Snowflake-provided connectors where available, making it easy to link systems. This also ties into a broader Snowflake benefit: being able to scale on demand to exactly the size you need to deliver top performance. No need to build an extra-large server and let it sit idle, or exceed the threshold of a legacy system and have to deal with the choke point. Snowflake’s scalability ensures Ensono always has the right amount of resources at all the right times.

Try Snowflake Connector for ServiceNow for yourself

To learn more about Ensono’s work with Snowflake and the native Snowflake Connector for ServiceNow, watch our Show Me Your Architecture webinar, How Ensono Leverages ServiceNow Data in Snowflake to Deliver Better Customer Outcomes. Try out the Snowflake Connector for ServiceNow by installing it from Snowflake Marketplace. It’s available with no additional access fees set by Snowflake and uses Snowflake credits for compute and storage. You can also explore the product documentation and go through the QuickStart. The post Ensono Cuts Costs with Snowflake Connector for ServiceNow appeared first on Snowflake. View the full article
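For readers who want to see what “combining ServiceNow data with other sources” looks like once the connector has landed its tables, here is a minimal sketch using the Snowflake Python connector. Everything in it is a placeholder assumption rather than part of the product: the connection parameters, the servicenow_db destination database and incident table (the connector’s destination objects depend on how it was configured at install time), and the crm_db.public.accounts table.

```python
# Minimal sketch: join connector-ingested ServiceNow incidents with CRM data.
# All connection parameters and object names are hypothetical placeholders;
# adjust them to the destination database chosen when the Snowflake Connector
# for ServiceNow was installed.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",
)

QUERY = """
    SELECT a.account_name,
           COUNT(*) AS open_incidents
    FROM   servicenow_db.servicenow.incident AS i   -- hypothetical connector destination
    JOIN   crm_db.public.accounts            AS a   -- hypothetical CRM table
      ON   i.company = a.servicenow_company_id
    WHERE  i.state <> 'Closed'
    GROUP BY a.account_name
    ORDER BY open_incidents DESC
"""

with conn.cursor() as cur:
    cur.execute(QUERY)
    for account_name, open_incidents in cur:
        print(f"{account_name}: {open_incidents} open incidents")

conn.close()
```

Because the refresh schedule is handled by the connector itself, a query like this always sees data no older than the configured refresh frequency.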
  8. Data-driven decision-making is one of the core principles of many organizations these days. Companies spend thousands of dollars working on data and interpreting its behavior to increase sales and scale their business. Moving all the company data from a relational database to a fully managed data warehousing service is an essential step to consider. This […]View the full article
  9. Legacy security information and event management (SIEM) solutions, like Splunk, are powerful tools for managing and analyzing machine-generated data. They have become indispensable for organizations worldwide, particularly for security teams. But as much as security operation center (SOC) analysts have come to rely on solutions like Splunk, there is one complaint that comes up for some: Costs can quickly add up.

The issue centers around their volume-based pricing model. This model can force security teams to make difficult decisions on what data to ingest. There are a number of online threads — see here, here and here just to link to a few — dedicated to how best to control costs, while limiting how much an organization has to compromise its security. But what if security teams didn’t have to make tradeoffs? This blog post explores how Snowflake can help with this challenge. Let’s start with a few cost factors organizations need to consider with their legacy SIEM solution and how Snowflake can help.

Legacy SIEM cost factors to keep in mind

Data ingestion: Traditional SIEMs often impose limits to data ingestion and data retention. Snowflake allows security teams to store all their data in a single platform and maintain it all in a readily accessible state, with virtually unlimited cloud data storage capacity. There are a few ways to ingest data into Snowflake. Security sources can be ingested directly through native means such as streaming, stages, syslog, native connectors or secure data sharing. Snowflake’s Snowpipe service helps bring in new data easily, at a price that is tailored to an organization’s needs. The most common method is Snowpipe auto ingest, which works for security teams who regularly ingest machine data. But this method isn’t for everyone, because loading small amounts of data slowly or many small files can cost more than other options. Snowpipe Streaming is another method that can save security teams money. With Snowpipe Streaming there’s no need to prepare files before loading, making the cost of getting data more predictable. Security teams can also reduce their costs by loading certain datasets in batches instead of continuously. For example, they could load data that isn’t needed for instant detection three times a day instead of constantly streaming it, which can lead to significant savings. (A minimal sketch of this batch-loading pattern appears after this item.)

Data retention: Many legacy SIEMs delete activity logs, transaction records and other details from their systems after a few days, weeks or months. With Snowflake, security teams don’t have to work around these data retention windows. Instead, all data is always accessible for analysis, which simplifies cost planning and the data management strategy. It also provides more reliable generation of key security metrics such as visibility coverage, SLA performance, mean time to detect (MTTD) and mean time to respond (MTTR). Snowflake also helps security teams save time by automatically compressing and encrypting the data, making it ready to query.

Detection and investigation processing: Security teams depend on detection rules to find important events automatically. These rules need computing power to analyze data and spot attacks. In the cloud, computing can be measured in various ways, like bytes scanned or CPU cycles. This affects how much it costs and how predictable the costs are for processing detections. While computing costs might not have been a concern with fixed hardware in the past, it’s a whole new game in the cloud. For security teams, investigations require computational power to analyze collected data, similar to running detections. Some solutions utilize different engines, such as stream or batch processing, for detections and investigations, while others employ the same engine for both tasks. Snowflake helps security teams understand how the query engine functions at a basic level, which helps them effectively plan for the cost estimates of their investigations.

Moving away from volume ingest-based pricing

A traditional SIEM typically manages all the data ingestion, transformation, detection and investigation processing for security teams. While out-of-the-box connectors and normalization can be useful, customers end up paying more by the nature of legacy SIEMs that use ingest volume-based pricing models. It’s important here to understand how this pricing model works. Ingest volume-based pricing can vary among the different legacy SIEM vendors, but the basic principle remains the same: the more data security teams send to the SIEM for analysis, the higher the cost.

By moving away from traditional volume-based pricing models, security teams can gain more control of what logs they have access to and how much they are spending. A consumption-based pricing model, like Snowflake’s, allows security teams to have all the data on hand while paying for only the compute resources they use, making security more cost-effective. Snowflake’s pricing model is designed to offer flexibility and scalability, giving security teams the ability to only pay for the resources they use without being tied to long-term contracts or upfront commitments.

How Snowflake Works

An open-architecture deployment with a modern security data lake, and best-of-breed applications from Snowflake, can keep costs down while improving an organization’s security posture. A security data lake eliminates data silos by removing limits on ingest and retention. Organizations can use a security data lake to scale resources up and down automatically and only pay for the resources they use — potentially controlling their costs without compromising their security. Security data lakes can also help analysts apply complex detection logic and security policies to log data and security tool output. Security analysts can quickly join security logs with contextual data sets, such as asset inventory, user details, configuration details and other information, to eliminate would-be false positives and identify stealthy threats.

The value proposition is clear: organizations can consolidate their security data affordably and gain the flexibility to query that data at any time. Snowflake empowers organizations to make data-driven choices for long-term gain. We’ll dive into some customer success stories to show the potential of this approach.

Real customer success stories

If done right, Snowflake customers can experience remarkable cost savings. Let’s take a closer look at some notable success stories across various industries.

At Comcast, Snowflake’s security data lake is now an integral component of their security data fabric. Instead of employees managing on-premises infrastructure, the Comcast security data lake built on Snowflake’s elastic engine in the cloud stores over 10 petabytes (PB) of data with hot retention for over a year, saving millions of dollars. Automated sweeps of over 50,000 indicators of compromise (IOCs) across the 10-PB security data lake can now be completed in under 30 minutes.

Guild Education can claim “up to 50% cost savings” working with Snowflake and is just one example that highlights the potentially significant financial benefits organizations can unlock with the Snowflake Data Cloud.

By adopting Snowflake as its data lake for security events, corporate travel management company Navan achieved a best-of-breed security architecture that is both cost-efficient and cutting-edge. The results are impressive:

  • Over 70% cost savings by adopting a modern SIEM-less architecture
  • 15K+ hours saved in 8 months
  • 4x improvement in MITRE ATT&CK coverage in 8 months

Ready to witness the transformative power of Snowflake? Watch our demo and discover how you can revolutionize your data management strategy, unlock substantial cost savings, and propel your organization into a new era of efficiency and innovation. Learn how you can augment your Splunk strategy with Snowflake today. The post How to Navigate the Costs of Legacy SIEMs with Snowflake appeared first on Snowflake. View the full article
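To make the batch-loading idea referenced above concrete, here is a minimal sketch of a job that bulk loads low-urgency security logs a few times a day instead of streaming them continuously. It is one possible pattern rather than Snowflake’s reference implementation, and the stage, table and warehouse names are hypothetical placeholders.

```python
# Minimal sketch: a small job (run from cron, an orchestrator, or a Snowflake
# task) that bulk loads low-urgency security logs from an external stage
# instead of streaming them continuously. Object names are hypothetical.
import snowflake.connector

COPY_STATEMENT = """
    COPY INTO security_db.raw.dns_logs                -- hypothetical target table
    FROM @security_db.raw.s3_security_stage/dns/      -- hypothetical external stage
    FILE_FORMAT = (TYPE = JSON)
    ON_ERROR = 'CONTINUE'
"""

def load_batch() -> None:
    conn = snowflake.connector.connect(
        account="<account_identifier>",
        user="<user>",
        password="<password>",
        warehouse="SECURITY_LOAD_WH",  # a small warehouse that only runs during loads
    )
    try:
        with conn.cursor() as cur:
            cur.execute(COPY_STATEMENT)
            # Each result row summarizes one loaded file (name, status, rows parsed, ...).
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()

if __name__ == "__main__":
    load_batch()
```

Scheduled a few times a day, the same statement turns per-event streaming ingest into a handful of predictable, warehouse-sized compute bursts.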
  10. Amazon Data Firehose (Firehose) now offers direct integration with Snowflake Snowpipe Streaming. Firehose enables customers to reliably capture, transform, and deliver data streams into Amazon S3, Amazon Redshift, Splunk, and other destinations for analytics. With this new feature, customers can stream clickstream, application, and AWS service logs from multiple sources, including Kinesis Data Streams, to Snowflake. With a few clicks, customers can set up a Firehose stream to deliver data to Snowflake. Firehose automatically scales to stream gigabytes of data, and records are available in Snowflake within seconds. View the full article
  11. In today’s data-driven world, developer productivity is essential for organizations to build effective and reliable products, accelerate time to value, and fuel ongoing innovation. To deliver on these goals, developers must have the ability to manipulate and analyze information efficiently. Yet while SQL applications have long served as the gateway to access and manage data, Python has become the language of choice for most data teams, creating a disconnect. Recognizing this shift, Snowflake is taking a Python-first approach to bridge the gap and help users leverage the power of both worlds.

Our previous Python connector API, primarily available for those who need to run SQL via a Python script, enabled a connection to Snowflake from Python applications. This traditional SQL-centric approach often challenged data engineers working in a Python environment, requiring context-switching and limiting the full potential of Python’s rich libraries and frameworks. Since the previous Python connector API mostly communicated via SQL, it also hindered the ability to manage Snowflake objects natively in Python, restricting data pipeline efficiency and the ability to complete complex tasks.

Snowflake’s new Python API (in public preview) marks a significant leap forward, offering a more streamlined, powerful solution for using Python within your data pipelines — and furthering our vision to empower all developers, regardless of experience, with a user-friendly and approachable platform.

A New Era: Introducing Snowflake’s Python API

With the new Snowflake Python API, readily available through pip install snowflake, developers no longer need to juggle between languages or grapple with cumbersome syntax. They can effortlessly leverage the power of Python for a seamless, unified experience across Snowflake workloads encompassing data engineering, Snowpark, machine learning and application development. This API is a testament to Snowflake’s commitment to a Python-first approach, offering a range of features designed to streamline workflows and enhance developer productivity.

Key benefits of the new Snowflake Python API include:

  • Simplified syntax and intuitive API design: Featuring a Pythonic design, the API is built on the foundation of REST APIs, which are known for their clarity and ease of use. This allows developers to interact with Snowflake objects naturally and efficiently, minimizing the learning curve and reducing development time.
  • Rich functionality and support for advanced operations: The API goes beyond basic operations, offering comprehensive functionality for managing various Snowflake resources and performing complex tasks within your Python environment. This empowers developers to realize the full potential of Snowflake through intuitive REST API calls.
  • Enhanced performance and improved scalability: Designed with performance in mind, the API leverages the inherent scalability of REST APIs, enabling efficient data handling and seamless scaling to meet your growing data needs. This allows your applications to handle large data sets and complex workflows efficiently.
  • Streamlined integration with existing tools and frameworks: The API seamlessly integrates with popular Python data science libraries and frameworks, enabling developers to leverage their existing skill sets and workflows effectively. This integration allows developers to combine the power of Python libraries with the capabilities of Snowflake through familiar REST API structures.

By prioritizing the developer experience and offering a comprehensive, user-friendly solution, Snowflake’s new Python API paves the way for a more efficient, productive and data-driven future.

Getting Started with the Snowflake Python API

Our Quickstart guide makes it easy to see how the Snowflake Python API can manage Snowflake objects. The API allows you to create, delete and modify tables, schemas, warehouses, tasks and much more. In this Quickstart, you’ll learn how to perform key actions — from installing the Snowflake Python API to retrieving object data and managing Snowpark Container Services. Dive in to experience how the enhanced Python API streamlines your data workflows and unlocks the full potential of Python within Snowflake. To get started, explore the comprehensive API documentation, which will guide you through every step. (An approximate usage sketch also appears after this item.)

We recommend that Python developers prioritize the new API for data engineering tasks, since it offers a more intuitive and efficient approach compared to the legacy SQL connector. While the Python connector API remains available for specific SQL use cases, the new API is designed to be your go-to solution. By general availability, we aim to achieve feature parity, empowering you to complete 100% of your data engineering tasks entirely through Python. This means you’ll only need to use SQL commands if you truly prefer them or for rare unsupported functionalities.

The New Wave of Native DevOps on Snowflake

The Snowflake Python API release is among a series of native DevOps tools becoming available on the Snowflake platform — all of which aim to empower developers of every experience level with a user-friendly and approachable platform. These benefits extend far beyond the developer team. The 2023 Accelerate State of DevOps Report, the annual report from Google Cloud’s DevOps Research and Assessment (DORA) team, reveals that a focus on user-centricity around the developer experience leads to a 40% increase in organizational performance. With intuitive tools for data engineers, data scientists and even citizen developers, Snowflake strives to enhance these advantages by fostering collaboration across your data and delivery teams.

By offering the flexibility and control needed to build unique applications, Snowflake aims to become your one-stop shop for data — minimizing reliance on third-party tools for core development lifecycle use cases and ultimately reducing your total cost of ownership. We’re excited to share more innovations soon, making data even more accessible for all. For a deeper dive into Snowflake’s Python API and other native Snowflake DevOps features, register for the Snowflake Data Cloud Summit 2024. Or, experience these features firsthand at our free Dev Day event on June 6th in the Demo Zone. The post Snowflake’s New Python API Empowers Data Engineers to Build Modern Data Pipelines with Ease appeared first on Snowflake. View the full article
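To give a feel for what managing Snowflake objects “natively in Python” looks like, here is a compressed sketch based on the public-preview snowflake.core package described above. Treat it as an approximation under stated assumptions: the classes shown (Root, Database, Schema, Warehouse) follow the preview Quickstart, but module paths, constructor arguments and method signatures may change before general availability, and the connection parameters are placeholders.

```python
# Approximate sketch of the public-preview Snowflake Python API
# (pip install snowflake). Names and signatures may differ by version;
# consult the API documentation for the authoritative reference.
from snowflake.core import Root
from snowflake.core.database import Database
from snowflake.core.schema import Schema
from snowflake.core.warehouse import Warehouse
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account_identifier>",   # placeholder credentials
    "user": "<user>",
    "password": "<password>",
    "role": "SYSADMIN",
}

session = Session.builder.configs(connection_parameters).create()
root = Root(session)

# Create a database and schema as Python objects -- no SQL strings.
# (Re-running this without a create mode will error if the objects already
# exist; the API's create modes, covered in the docs, handle that case.)
db = root.databases.create(Database(name="DEMO_PYAPI_DB"))
schema = db.schemas.create(Schema(name="PIPELINES"))

# Provision a small warehouse the same way.
wh = root.warehouses.create(
    Warehouse(name="DEMO_PYAPI_WH", warehouse_size="SMALL", auto_suspend=60)
)

# Objects can be listed and inspected by name later.
for database in root.databases.iter(like="DEMO_PYAPI_%"):
    print(database.name)
```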
  12. In March, Snowflake announced exciting releases, including advances in AI and ML with new features in Snowflake Cortex, new governance and privacy features in Snowflake Horizon, and broader developer support with the Snowflake CLI. Read on to learn more about everything we announced last month.

Snowflake Cortex LLM Functions – in public preview

Snowflake Cortex is an intelligent, fully managed service that delivers state-of-the-art large language models (LLMs) as serverless SQL/Python functions; there are no integrations to set up, data to move or GPUs to provision. In Snowflake Cortex, there are task-specific functions that teams can use to quickly and cost-effectively execute complex tasks, such as translation, sentiment analysis and summarization. Additionally, to build custom apps, teams can use the complete function to run custom prompts using LLMs from Mistral AI, Meta and Google. Learn more. (A brief usage sketch of the task-specific functions appears after this item.)

Streamlit

Streamlit 1.26 – in public preview

We’re excited to announce support for Streamlit version 1.26 within Snowflake. This update, in preview, expands your options for building data apps directly in Snowflake’s secure environment. Now you can leverage the latest features and functionalities available in Streamlit 1.26.0 — including st.chat_input and st.chat_message, two powerful primitives for creating conversational interfaces within your data apps. This addition allows users to interact with your data applications using natural language, making them more accessible and user-friendly. You can also utilize the new features of Streamlit 1.26.0 to create even more interactive and informative data visualizations and dashboards. To learn more and get started, head over to the Snowflake documentation.

Snowflake Horizon

Sensitive Data Custom Classification – in public preview

In addition to using standard classifiers in Snowflake, customers can now also write their own classifiers using SQL with custom logic to define what data is sensitive to their organization. This is an important enhancement to data classification and provides the necessary extensibility that customers need to detect and classify more of their data. Learn more.

Data Quality Monitoring – in public preview

Data Quality Monitoring is a built-in solution with out-of-the-box metrics, like null counts, time since the object was last updated and count of rows inserted into an object. Customers can even create custom metrics to monitor the quality of data. They can then effectively monitor and report on data quality by defining how frequently each metric is automatically measured and configuring alerts to receive email notifications when quality thresholds are violated. Learn more.

Snowflake Data Clean Rooms – generally available in select regions

Snowflake Data Clean Rooms allow customers to unlock insights and value through secure data collaboration. Launched as a Snowflake Native App on Snowflake Marketplace, Snowflake Data Clean Rooms are now generally available to customers in AWS East, AWS West and Azure West. Snowflake Data Clean Rooms make it easy to build and use data clean rooms for both technical and non-technical users, with no additional access fees set by Snowflake. Find out more in this blog.

DevOps on Snowflake

Snowflake CLI – public preview

The new Snowflake CLI is an open source tool that empowers developers with a flexible and extensible interface for managing the end-to-end lifecycle of applications across various workloads (Snowpark, Snowpark Container Services, Snowflake Native Applications and Streamlit in Snowflake). It offers features such as user-defined functions, stored procedures, Streamlit integration and direct SQL execution. Learn more.

Snowflake Marketplace

Snowflake customers can tap into Snowflake Marketplace for access to more than 2,500 live and ready-to-query third-party data, apps and AI products, all in one place (as of April 10, 2024). Here are all the providers who launched on Marketplace in March:

AI/ML Products
  • Brillersys – Time Series Data Generator
  • Atscale, Inc. – Semantic Modeling Data
  • paretos GmbH – Demand Forecasting App

Connectors/SaaS Data
  • HALitics – eCommerce Platform Connector

Developer Tools
  • DataOps.live – CI/CD, Automation and DataOps

Data Governance, Quality and Cost Optimization
  • Select Labs US Inc. – Snowflake Performance & Cost Optimization
  • Foreground Data Solutions Inc – PII Data Detector
  • CareEvolution – Data Format Transformation
  • Merse, Inc – Snowflake Performance & Cost Optimization
  • Qbrainx – Snowflake Performance & Cost Optimization
  • Yuki – Snowflake Performance Optimization
  • DATAN3RD LLC – Data Quality App

Third-Party Data Providers
  • Upper Hand – Sports Facilities & Athletes Data
  • Sporting Group – Sportsbook Data
  • Quiet Data – UK Company Data
  • Manifold Data Mining – Demographics Data in Canada
  • SESAMm – ESG Controversy Data
  • KASPR Datahaus – Internet Quality & Anomaly Data
  • Blitzscaling – Blockchain Data
  • Starlitics – ETF and Mutual Fund Data
  • SFR Analytics – Geographic Data
  • SignalRank – Startup Data
  • GfK SE – Purchasing Power Data

Forward-Looking Statement

This post contains express and implied forward-looking statements, including statements regarding (i) Snowflake’s business strategy, (ii) Snowflake’s products, services, and technology offerings, including those that are under development or not generally available, (iii) market growth, trends, and competitive considerations, and (iv) the integration, interoperability, and availability of Snowflake’s products with and on third-party platforms. These forward-looking statements are subject to a number of risks, uncertainties, and assumptions, including those described under the heading “Risk Factors” and elsewhere in the Quarterly Reports on Form 10-Q and Annual Reports on Form 10-K that Snowflake files with the Securities and Exchange Commission. In light of these risks, uncertainties, and assumptions, actual results could differ materially and adversely from those anticipated or implied in the forward-looking statements. As a result, you should not rely on any forward-looking statements as predictions of future events.

© 2024 Snowflake Inc. All rights reserved. Snowflake, the Snowflake logo, and all other Snowflake product, feature, and service names mentioned herein are registered trademarks or trademarks of Snowflake Inc. in the United States and other countries. All other brand names or logos mentioned or used herein are for identification purposes only and may be the trademarks of their respective holder(s). Snowflake may not be associated with, or be sponsored or endorsed by, any such holder(s). The post New Snowflake Features Released in March 2024 appeared first on Snowflake. View the full article
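To ground the Cortex description above, here is a minimal sketch that calls the task-specific functions (SENTIMENT, TRANSLATE, SUMMARIZE) from Python through the standard connector. The source table and column names are hypothetical, and function and model availability depends on your region and the feature’s preview status when you run it.

```python
# Minimal sketch: calling Snowflake Cortex task-specific LLM functions.
# The review table is a hypothetical placeholder; Cortex availability
# varies by region and release status.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",
)

CORTEX_QUERY = """
    SELECT review_id,
           SNOWFLAKE.CORTEX.SENTIMENT(review_text)             AS sentiment_score,
           SNOWFLAKE.CORTEX.TRANSLATE(review_text, 'de', 'en') AS english_text,
           SNOWFLAKE.CORTEX.SUMMARIZE(review_text)             AS summary
    FROM   demo_db.public.product_reviews   -- hypothetical source table
    LIMIT  10
"""

with conn.cursor() as cur:
    cur.execute(CORTEX_QUERY)
    for review_id, score, english_text, summary in cur:
        print(review_id, round(score, 2), summary[:80])

conn.close()
```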
  13. Today’s fast-paced world demands timely insights and decisions, which is driving the importance of streaming data. Streaming data refers to data that is continuously generated from a variety of sources. The sources of this data, such as clickstream events, change data capture (CDC), application and service logs, and Internet of Things (IoT) data streams, are proliferating. Snowflake offers two options to bring streaming data into its platform: Snowpipe and Snowflake Snowpipe Streaming. Snowpipe is suitable for file ingestion (batching) use cases, such as loading large files from Amazon Simple Storage Service (Amazon S3) to Snowflake. Snowpipe Streaming, a newer feature released in March 2023, is suitable for rowset ingestion (streaming) use cases, such as loading a continuous stream of data from Amazon Kinesis Data Streams or Amazon Managed Streaming for Apache Kafka (Amazon MSK).

Before Snowpipe Streaming, AWS customers used Snowpipe for both use cases: file ingestion and rowset ingestion. First, you ingested streaming data to Kinesis Data Streams or Amazon MSK, then used Amazon Data Firehose to aggregate and write streams to Amazon S3, followed by using Snowpipe to load the data into Snowflake. However, this multi-step process can result in delays of up to an hour before data is available for analysis in Snowflake. Moreover, it’s expensive, especially when you have small files that Snowpipe has to upload to the Snowflake customer cluster.

To solve this issue, Amazon Data Firehose now integrates with Snowpipe Streaming, enabling you to capture, transform, and deliver data streams from Kinesis Data Streams, Amazon MSK, and Firehose Direct PUT to Snowflake in seconds at a low cost. With a few clicks on the Amazon Data Firehose console, you can set up a Firehose stream to deliver data to Snowflake. There are no commitments or upfront investments to use Amazon Data Firehose, and you only pay for the amount of data streamed.

Some key features of Amazon Data Firehose include:
  • Fully managed serverless service – You don’t need to manage resources, and Amazon Data Firehose automatically scales to match the throughput of your data source without ongoing administration.
  • Straightforward to use with no code – You don’t need to write applications.
  • Real-time data delivery – You can get data to your destinations quickly and efficiently in seconds.
  • Integration with over 20 AWS services – Seamless integration is available for many AWS services, such as Kinesis Data Streams, Amazon MSK, Amazon VPC Flow Logs, AWS WAF logs, Amazon CloudWatch Logs, Amazon EventBridge, AWS IoT Core, and more.
  • Pay-as-you-go model – You only pay for the data volume that Amazon Data Firehose processes.
  • Connectivity – Amazon Data Firehose can connect to public or private subnets in your VPC.

This post explains how you can bring streaming data from AWS into Snowflake within seconds to perform advanced analytics. We explore common architectures and illustrate how to set up a low-code, serverless, cost-effective solution for low-latency data streaming.

Overview of solution

The following are the steps to implement the solution to stream data from AWS to Snowflake:
  • Create a Snowflake database, schema, and table.
  • Create a Kinesis data stream.
  • Create a Firehose delivery stream with Kinesis Data Streams as the source and Snowflake as its destination using a secure private link.
  • To test the setup, generate sample stream data from the Amazon Kinesis Data Generator (KDG) with the Firehose delivery stream as the destination.
  • Query the Snowflake table to validate the data loaded into Snowflake.

The solution is depicted in the following architecture diagram.

Prerequisites

You should have the following prerequisites:
  • An AWS account and access to the following AWS services: AWS Identity and Access Management (IAM), Kinesis Data Streams, Amazon S3, and Amazon Data Firehose.
  • Familiarity with the AWS Management Console.
  • A Snowflake account.
  • A key pair generated and your user configured to connect securely to Snowflake. For instructions, refer to the following: Generate the private key; Generate a public key; Store the private and public keys securely; Assign the public key to a Snowflake user; and Verify the user’s public key fingerprint.
  • An S3 bucket for error logging.
  • The KDG set up. For instructions, refer to Test Your Streaming Data Solution with the New Amazon Kinesis Data Generator.

Create a Snowflake database, schema, and table

Complete the following steps to set up your data in Snowflake:
  • Log in to your Snowflake account and create the database:

        create database adf_snf;

  • Create a schema in the new database:

        create schema adf_snf.kds_blog;

  • Create a table in the new schema:

        create or replace table iot_sensors (
            sensorId number,
            sensorType varchar,
            internetIP varchar,
            connectionTime timestamp_ntz,
            currentTemperature number
        );

Create a Kinesis data stream

Complete the following steps to create your data stream:
  • On the Kinesis Data Streams console, choose Data streams in the navigation pane.
  • Choose Create data stream.
  • For Data stream name, enter a name (for example, KDS-Demo-Stream).
  • Leave the remaining settings as default.
  • Choose Create data stream.

Create a Firehose delivery stream

Complete the following steps to create a Firehose delivery stream with Kinesis Data Streams as the source and Snowflake as its destination:
  • On the Amazon Data Firehose console, choose Create Firehose stream.
  • For Source, choose Amazon Kinesis Data Streams.
  • For Destination, choose Snowflake.
  • For Kinesis data stream, browse to the data stream you created earlier.
  • For Firehose stream name, leave the default generated name or enter a name of your preference.
  • Under Connection settings, provide the following information to connect Amazon Data Firehose to Snowflake:
      • For Snowflake account URL, enter your Snowflake account URL.
      • For User, enter the user name generated in the prerequisites.
      • For Private key, enter the private key generated in the prerequisites. Make sure the private key is in PKCS8 format. Do not include the PEM header-BEGIN prefix and footer-END suffix as part of the private key. If the key is split across multiple lines, remove the line breaks.
      • For Role, select Use custom Snowflake role and enter the IAM role that has access to write to the database table.

    You can connect to Snowflake using public or private connectivity. If you don’t provide a VPC endpoint, the default connectivity mode is public. To allow list Firehose IPs in your Snowflake network policy, refer to Choose Snowflake for Your Destination. If you’re using a private link URL, provide the VPCE ID using SYSTEM$GET_PRIVATELINK_CONFIG:

        select SYSTEM$GET_PRIVATELINK_CONFIG();

    This function returns a JSON representation of the Snowflake account information necessary to facilitate the self-service configuration of private connectivity to the Snowflake service, as shown in the following screenshot. For this post, we’re using a private link, so for VPCE ID, enter the VPCE ID.
  • Under Database configuration settings, enter your Snowflake database, schema, and table names.
  • In the Backup settings section, for S3 backup bucket, enter the bucket you created as part of the prerequisites.
  • Choose Create Firehose stream.

Alternatively, you can use an AWS CloudFormation template to create the Firehose delivery stream with Snowflake as the destination rather than using the Amazon Data Firehose console. To use the CloudFormation stack, choose

Generate sample stream data

Generate sample stream data from the KDG with the Kinesis data stream you created (a small boto3-based alternative to the KDG appears after this item):

    {
        "sensorId": {{random.number(999999999)}},
        "sensorType": "{{random.arrayElement( ["Thermostat","SmartWaterHeater","HVACTemperatureSensor","WaterPurifier"] )}}",
        "internetIP": "{{internet.ip}}",
        "connectionTime": "{{date.now("YYYY-MM-DDTHH:m:ss")}}",
        "currentTemperature": {{random.number({"min":10,"max":150})}}
    }

Query the Snowflake table

Query the Snowflake table:

    select * from adf_snf.kds_blog.iot_sensors;

You can confirm that the data generated by the KDG that was sent to Kinesis Data Streams is loaded into the Snowflake table through Amazon Data Firehose.

Troubleshooting

If data is not loaded into Kinesis Data Streams after the KDG sends data to the Firehose delivery stream, refresh and make sure you are logged in to the KDG. If you made any changes to the Snowflake destination table definition, recreate the Firehose delivery stream.

Clean up

To avoid incurring future charges, delete the resources you created as part of this exercise if you are not planning to use them further.

Conclusion

Amazon Data Firehose provides a straightforward way to deliver data to Snowpipe Streaming, enabling you to save costs and reduce latency to seconds. To try Amazon Data Firehose with Snowflake, refer to the Amazon Data Firehose with Snowflake as destination lab.

About the Authors

Swapna Bandla is a Senior Solutions Architect in the AWS Analytics Specialist SA Team. Swapna has a passion towards understanding customers’ data and analytics needs and empowering them to develop cloud-based well-architected solutions. Outside of work, she enjoys spending time with her family.

Mostafa Mansour is a Principal Product Manager – Tech at Amazon Web Services where he works on Amazon Kinesis Data Firehose. He specializes in developing intuitive product experiences that solve complex challenges for customers at scale. When he’s not hard at work on Amazon Kinesis Data Firehose, you’ll likely find Mostafa on the squash court, where he loves to take on challengers and perfect his dropshots.

Bosco Albuquerque is a Sr. Partner Solutions Architect at AWS and has over 20 years of experience working with database and analytics products from enterprise database vendors and cloud providers. He has helped technology companies design and implement data analytics solutions and products. View the full article
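If you would rather script the test data than use the KDG, a small producer like the one below can write records shaped like the iot_sensors table to the Kinesis data stream created above. This is a sketch that is not part of the original walkthrough; it assumes boto3 is installed and AWS credentials are already configured in your environment.

```python
# Alternative to the Kinesis Data Generator: a boto3 producer that writes
# records matching the iot_sensors table to the KDS-Demo-Stream data stream.
# Assumes AWS credentials are configured (environment, profile, or role).
import json
import random
from datetime import datetime, timezone

import boto3

STREAM_NAME = "KDS-Demo-Stream"   # the data stream created earlier
SENSOR_TYPES = ["Thermostat", "SmartWaterHeater", "HVACTemperatureSensor", "WaterPurifier"]

kinesis = boto3.client("kinesis")

def make_record() -> dict:
    """Build one event with the same fields as the iot_sensors table."""
    return {
        "sensorId": random.randint(0, 999_999_999),
        "sensorType": random.choice(SENSOR_TYPES),
        "internetIP": ".".join(str(random.randint(1, 254)) for _ in range(4)),
        "connectionTime": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
        "currentTemperature": random.randint(10, 150),
    }

for _ in range(100):
    record = make_record()
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=str(record["sensorId"]),
    )
```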
  14. Huge performance-boosting opportunities await those who choose the optimal data warehouse for their business. Identifying the custom data points that steer your organization’s successful outcomes is crucial. Decision-making is optimized through sophisticated means of accessing and analyzing your company’s data. As the use of data warehouses grows exponentially, consumer choices become increasingly difficult to discern […]View the full article
  15. With RudderStack, you can build your customer data platform on top of Snowflake and keep control of your data.View the full article
  16. In 2020, Snowflake announced a new global competition to recognize the work of early-stage startups building their apps — and their businesses — on Snowflake, offering up to $250,000 in investment as the top prize. Four years later, the Snowflake Startup Challenge has grown into a premier showcase for emerging startups, garnering interest from companies in over 100 countries and offering a prize package featuring a portion of up to $1 million in potential investment opportunities and exclusive mentorship and marketing opportunities from NYSE. This year’s entries presented an impressively diverse set of use cases. The list of Top 10 semi-finalists is a perfect example: we have use cases for cybersecurity, gen AI, food safety, restaurant chain pricing, quantitative trading analytics, geospatial data, sales pipeline measurement, marketing tech and healthcare. Just as varied was the list of Snowflake tech that early-stage startups are using to drive their innovative entries. Snowflake Native Apps (generally available on AWS and Azure, private preview on GCP) and Snowpark Container Services (currently in public preview) were exceptionally popular, which speaks to their flexibility, ease of use and business value. In fact, 8 of the 10 startups in our semi-finalist list plan to use one or both of these technologies in their offerings. We saw a lot of interesting AI/ML integrations and capabilities plus the use of Dynamic Tables (currently in public preview), UDFs and stored procedures, Streamlit, and Streamlit in Snowflake. Many entries also used Snowpark, taking advantage of the ability to work in the code they prefer to develop data pipelines, ML models and apps, then execute in Snowflake. Our sincere thanks go out to everyone who participated in this year’s competition. We recognize the amount of work involved in your entries, and we appreciate every submission. Let’s meet the 10 companies competing for the 2024 Snowflake Startup Challenge crown! BigGeo BigGeo accelerates geospatial data processing by optimizing performance and eliminating challenges typically associated with big data. Built atop BigGeo’s proprietary Volumetric and Surface-Level Discrete Global Grid System (DGGS), which manages surface-level, subsurface and aerial data, BigGeo Search allows you to perform geospatial queries against large geospatial data sets at high speeds. Capable of a headless deployment into Snowpark Container Services, BigGeo can be used to speed up queries of data stored in Snowflake, gather those insights into a dashboard, visualize them on a map, and more. Implentio Implentio is a centralized tool that helps ecommerce ops and finance teams efficiently and cost-effectively manage fulfillment and logistics spending. The solution ingests, transforms and centralizes large volumes of operations data from disparate systems and applies AI and ML to deliver advanced optimizations, insights and analyses that help teams improve invoice reconciliation and catch 3PL/freight billing errors. Innova-Q Focusing on food safety and quality, Innova-Q’s Quality Performance Forecast Application delivers near real-time insights into product and manufacturing process performance so companies can assess and address product risks before they affect public safety, operational effectiveness or direct costs. The Innova-Q dashboard provides access to product safety and quality performance data, historical risk data, and analysis results for proactive risk management. 
Leap Metrics Leap Metrics is a SaaS company that seeks to improve health outcomes for populations with chronic conditions while reducing the cost of care. Their analytics-first approach to healthcare leverages AI-powered insights and workflows through natively integrated data management, analytics and care management solutions. Leap Metrics’ Sevida platform unifies actionable analytics and AI with intelligent workflows tailored for care teams for an intuitive experience. Quilr Quilr’s adaptive protection platform uses AI and the principle of human-centric security to reduce incidents caused by human errors, unintentional insiders and social engineering. It provides proactive assistance to employees before they perform an insecure action, without disrupting business workflow. Quilr also gives organizations visibility into their Human Risk Posture to better understand what risky behaviors their users are performing, and where they have process or control gaps that could result in breaches. Scientific Financial Systems Beating the market is the driving force for investment management firms — but beating the market is not easy. SFS’s Quotient provides a unique set of analytics tools based on data science and ML best practices that rapidly analyzes large amounts of data and enables accurate data calculations at scale, with full transparency into calculation details. Quotient automates data management, time-series operations and production so investment firms can focus on idea generation and building proprietary alpha models to identify market insights and investment opportunities. SignalFlare.ai by Extropy 360 Pricing and analytics for chain restaurants is the primary focus of SignalFlare.ai, a decision intelligence solution that combines ML models for price optimization and risk simulation with geospatial expertise. Restaurants can use SignalFlare to refine and analyze customer and location data so they can better capture price opportunities and drive customer visits. Stellar Stellar is designed to make generative AI easy for Snowflake customers. It deploys gen AI components as containers on Snowpark Container Services, close to the customer’s data. Stellar Launchpad gives customers a conversational way to analyze and synthesize structured and unstructured data to power AI initiatives, making it possible to deploy multiple gen AI apps and virtual assistants to meet the demand for AI-driven business outcomes. Titan Systems Titan helps enterprises to manage, monitor and scale secure access to data in Snowflake with an infrastructure-as-code approach. Titan Core analyzes each change to your Snowflake account and evaluates them against a set of security policies, then rejects changes that are out of compliance to help catch data leaks before they happen. Vector Vector is a relationship intelligence platform that alerts sellers when they can break through the noise by detecting existing relationships between target accounts and happy customers, execs and investors. Vector can infer who knows whom and their connections by analyzing terabytes of contact, business, experience and IP data to determine digital fingerprints, attributes and shared experiences. What’s next: Preparing the perfect pitch In Round 2, each of these semi-finalists will create an investor pitch video, and their leadership team will be interviewed by the judges to discuss the company’s entry, the product and business strategy, and what the company would do with an investment should it win the 2024 Snowflake Startup Challenge. 
Based on this information, the judges will select three finalists, to be announced in May. Those three companies will present to our esteemed judging panel — Benoit Dageville, Snowflake Co-Founder and President of Product; Denise Persson, Snowflake CMO; Lynn Martin, NYSE Group President; and Brad Gerstner, Altimeter Founder and CEO — during the Startup Challenge Finale at Dev Day in San Francisco on June 6. The judges will ask questions and deliberate live before naming the 2024 Grand Prize winner. Register for Dev Day now to see the live finale and experience all of the developer-centric demos and sessions, discussions, expert Q&As and hands-on labs designed to set you up for AI/ML and app dev success. Congratulations to all of the semi-finalists, and best of luck in the next round! The post Snowflake Startup Challenge 2024: Announcing the 10 Semi-Finalists appeared first on Snowflake. View the full article
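To make the Snowpark pattern mentioned above concrete, here is a minimal, illustrative sketch in Snowpark for Python. The connection parameters, table names and column names are placeholders rather than details from any entry; the point is simply that the pipeline is authored in Python while the work executes inside Snowflake.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import avg, col

# Placeholder connection details; replace with your own account, role and warehouse.
session = Session.builder.configs({
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# Author the transformation in Python; Snowpark pushes it down as SQL,
# so the data never leaves Snowflake.
orders = session.table("RAW.ORDERS")  # hypothetical source table
daily_avg = (
    orders
    .filter(col("STATUS") == "COMPLETE")
    .group_by(col("ORDER_DATE"))
    .agg(avg(col("AMOUNT")).alias("AVG_AMOUNT"))
)

# Materialize the result as a table inside Snowflake.
daily_avg.write.save_as_table("ANALYTICS.DAILY_ORDER_STATS", mode="overwrite")
```

The same session can also register UDFs and stored procedures or back a Streamlit in Snowflake app, which is why those technologies so often appear together in the entries described above.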
  17. Snowflake names RudderStack One to Watch in the Analytics category of its annual Modern Marketing Data Stack Report. View the full article
  18. Salesforce’s Customer Data Platform, Genie, relies on open data sharing with Snowflake. Does this signal a paradigm shift for the Customer 360? View the full article
  19. Data analytics involves storing, managing and processing data from different sources, then analyzing it thoroughly to develop solutions to business problems. While JSON helps interchange data between different web applications and sources through API connectors, Snowflake helps you analyze that data with its intuitive features. Therefore, JSON Snowflake data migration is crucial […] View the full article
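As a rough illustration of what that analysis can look like, the sketch below stores JSON in a VARIANT column and queries nested fields with Snowflake’s path syntax and LATERAL FLATTEN, using Snowpark for Python to run the SQL. The table and field names are hypothetical.

```python
from snowflake.snowpark import Session

def query_raw_json(session: Session) -> None:
    # A VARIANT column stores the JSON documents as-is (table name is illustrative).
    session.sql("CREATE TABLE IF NOT EXISTS RAW_EVENTS (payload VARIANT)").collect()

    # Nested fields are addressed with path notation and cast to SQL types;
    # LATERAL FLATTEN expands the embedded "items" array into rows.
    rows = session.sql("""
        SELECT
            payload:user.id::STRING    AS user_id,
            payload:event.type::STRING AS event_type,
            f.value:sku::STRING        AS sku
        FROM RAW_EVENTS,
             LATERAL FLATTEN(input => payload:items) f
    """).collect()

    for row in rows:
        print(row)
```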
  20. The most in-depth guide to Snowflake pricing - take a look at this technical deep dive. View the full article
  21. Data transformation is the process of converting data from one format to another — the “T” in ELT, or extract, load, transform — which enables organizations to get their data analytics-ready and derive insights and value from it. As companies collect more data, from disparate sources and in disparate formats, building and managing transformations has become exponentially more complex and time-consuming. The Snowflake Data Cloud includes powerful capabilities for transforming data and orchestrating data pipelines, and we partner with best-in-class providers to give customers a choice in the data transformation technologies they use.

Today, we are excited to announce that Snowflake Ventures is investing in our partner, Coalesce, which offers an intuitive, low-code transformation platform for developing and managing data pipelines. The Coalesce platform is uniquely built for Snowflake. Coalesce allows data teams to build complex transformations quickly and efficiently without deep coding expertise, while still providing all the extensibility the most technical Snowflake users will need. This expands the number of users who can contribute to data projects and enhances collaboration. Coalesce automatically generates Snowflake-native SQL and supports Snowflake data engineering features such as Snowpark, Dynamic Tables, AI/ML capabilities, and more.

Our investment helps Coalesce to continue providing first-class experiences for Snowflake users, including integrating closely to take advantage of the latest Data Cloud innovations. Coalesce will also lean into Snowpark Container Services and the Snowflake Native App Framework to provide a seamless user experience. With Snowflake Native Apps, customers can instantly deploy Coalesce on their Snowflake account and transact directly through Snowflake Marketplace.

Our goal at Snowflake is to provide developers, data engineers, and other users with optimal choice in the tools they use to prepare and manage data. We will continue to add new transformation capabilities to the Data Cloud and look forward to working with Coalesce to provide the best possible experience for transforming data so organizations can unlock the full potential of their data.

The post Snowflake Ventures Invests in Coalesce to Enable Simplified Data Transformation Development and Management Natively on the Data Cloud appeared first on Snowflake. View the full article
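As a rough sketch of what the “T” in ELT can look like when expressed as Snowflake-native SQL — a hand-written equivalent for illustration, not Coalesce’s generated output — the example below creates a Dynamic Table that keeps an aggregated result refreshed from a raw source table. The database, table and warehouse names are assumptions.

```python
from snowflake.snowpark import Session

def create_daily_spend_transform(session: Session) -> None:
    # A Dynamic Table declares the transformation once; Snowflake keeps the result
    # refreshed within the declared TARGET_LAG using the named warehouse.
    session.sql("""
        CREATE OR REPLACE DYNAMIC TABLE ANALYTICS.CUSTOMER_DAILY_SPEND
            TARGET_LAG = '15 minutes'
            WAREHOUSE = TRANSFORM_WH
        AS
        SELECT
            customer_id,
            DATE_TRUNC('day', order_ts) AS order_day,
            SUM(amount) AS total_spend
        FROM RAW.ORDERS
        WHERE status = 'COMPLETE'
        GROUP BY customer_id, DATE_TRUNC('day', order_ts)
    """).collect()
```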
  22. In December 2023, Snowflake announced its acquisition of data clean room technology provider Samooha. Samooha’s intuitive UI and focus on reducing the complexity of sharing data led to it being named one of the most innovative data science companies of 2024 by Fast Company. Now, Samooha’s offering is integrated into Snowflake and launched as Snowflake Data Clean Rooms, a Snowflake Native App on Snowflake Marketplace, generally available to customers in AWS East, AWS West and Azure West. Snowflake Data Clean Rooms make it easy to build and use data clean rooms in Snowflake, with no additional access fees set by Snowflake.

What is a data clean room?
Data clean rooms provide a controlled environment that allows multiple companies, or divisions of a company, to securely collaborate on sensitive or regulated data while fully preserving the privacy of the enterprise data. Enterprises should not have to make challenging trade-offs between following compliance regulations and making sensitive data available for collaboration. With data clean rooms, organizations have an opportunity to unlock the value of sensitive data by allowing for joint data analytics, machine learning and AI while anonymizing, processing and storing personally identifiable information (PII) in a compliant way. Data clean rooms allow multiple parties to securely collaborate on sensitive or regulated data, surfacing valuable insights while preserving the privacy of the data.

How does a data clean room work?
Data clean rooms can be used to control the following:
- What data comes into the clean room
- How the data in the clean room can be joined to other data in the clean room
- What types of analytics each party can perform on the data
- What data, if any, can leave the clean room

Any sensitive or regulated data, such as PII, that is loaded into the clean room is encrypted. The clean room provider has full control over the clean room environment, while approved partners can get a feed with anonymized data.

Why Snowflake Data Clean Rooms?
Until now, data clean room technology was generally deployed by large organizations with access to technical data privacy experts. Snowflake Data Clean Rooms remove the technical and financial barriers, allowing companies of all sizes to easily build, use and benefit from data clean rooms.

Unlock value with data clean rooms easily and at no additional license cost
Teams can stand up new data clean rooms quickly and easily, with no additional license fees, through an app that is available on Snowflake Marketplace. Built for business and technical users alike, Snowflake Data Clean Rooms allow organizations to unlock value from data faster with industry-specific workflows and templates such as audience overlap, reach and frequency, last-touch attribution and more. As a Snowflake Native App, Snowflake Data Clean Rooms makes it easy for technical and business users to build and use data clean rooms in Snowflake.

Tap into the open and interoperable ecosystem of the Snowflake Data Cloud
The Snowflake Data Cloud provides an open, neutral and interoperable data clean room ecosystem that allows organizations to collaborate with all their partners seamlessly, regardless of whether they have their own Snowflake accounts. Companies can also leverage turnkey third-party integrations and solutions for data enrichment, identity, activation and more across providers.
Snowflake Data Clean Rooms allows you to collaborate with your partners seamlessly across regions and clouds thanks to Cross-Cloud Snowgrid (Snowflake Data Clean Rooms is currently available in AWS East/West and Azure West). It provides a cross-cloud technology layer that allows you to interconnect your business’ ecosystems across regions and clouds and operate at scale.

Take advantage of Snowflake’s built-in privacy and governance features
Unlock privacy-enhanced collaboration on your sensitive data through an app built on the Snowflake Native App Framework. By bringing the clean room solution to your data, Snowflake Data Clean Rooms removes the need for data to ever leave the governance, security and privacy parameters of Snowflake. By leveraging Snowpark for AI/ML, cryptographic compute support, differential privacy models, security attestation guarantees and more, Snowflake Data Clean Rooms helps companies maintain privacy while allowing for deeper analytical insight with business partners. You can easily integrate data and activation partners to realize use cases in marketing, advertising and other industries. Snowflake Data Clean Rooms is a Snowflake Native App that runs directly in your Snowflake account, eliminating the need to move or copy data out of the governance, security and privacy parameters of Snowflake.

Data clean room use cases across industries
Key use cases for data clean rooms are found in marketing, media and advertising. However, organizations across industries are realizing value with data clean rooms, including financial services and healthcare and life sciences.

Attribution for advertising and marketing
One popular use case for data clean rooms is to link anonymized marketing and advertising data from multiple parties for attribution. If a company has its own first-party data containing attributes about its customers and their associated sales SKUs, it can use a data clean room to improve audience insights for advertising. Let’s say the company wants to find new customers with the same attributes as its best customers, and combine those attributes with other characteristics to drive upsell opportunities. To create the target segments and comply with privacy requirements, the company uploads its data into a clean room that it creates or that is shared by its ad partner. Participants can securely join any first-party data without exposing IDs (a minimal conceptual sketch of this principle appears at the end of this item). Without a data clean room, only limited amounts of data could flow between the various parties due to data privacy, regulations and competitive concerns.

Measurement for advertising and marketing
Another key data clean room use case is measuring the effectiveness of advertising and marketing campaigns. Advertisers want to understand who saw an advertisement, for example, as well as who engaged with it. This information is distributed across the different media partners it takes to serve an ad to a consumer. Creating a joint analysis across the data of these different media partners is important for advertisers to understand campaign results and to optimize future campaigns. Such measurement can only be realized through a data clean room, as it protects the sensitivity of the consumer data across all parties while surfacing valuable analytical insights.

Monetizing proprietary data
The omnichannel customer journey is complex, and it rarely starts with a brand’s advertisement.
For example, if a consumer is planning an upcoming purchase of a kitchen appliance, the journey is likely to start with online review sites. A reviews site collects top-of-funnel data that would be invaluable to the appliance brand. With a data clean room, the reviews website could create a compliant third-party data product, manage access to it through the clean room, and monetize it.

Consumer goods-retail collaboration
Data clean rooms allow retailers and consumer goods companies to collaborate with brands that advertise with them. For example, a retailer can share transaction data in a privacy- and governance-friendly manner to provide insights into conversion signals and enable better targeting, personalization and attribution.

Enhancing financial services customer data
Similar to use cases in marketing, data clean rooms enable financial institutions to securely collaborate across a variety of use cases, such as credit fraud modeling and money laundering detection. Sensitive financial consumer data can be enhanced with second- and third-party data sources and analyzed across institutional boundaries to detect anomalous patterns and behaviors, all while protecting consumer data privacy.

Enriching patient health data
In healthcare and life sciences, a hospital can use data clean rooms to share regulated patient data with a pharmaceutical company. The company can enrich and analyze the data to identify patterns in patient outcomes across clinical trials. The data clean room environment enables the patient data to remain private while still contributing to meaningful insights.

Learn more about Snowflake Data Clean Rooms
Get started today with Snowflake Data Clean Rooms: visit the listing on Snowflake Marketplace for additional details. To see a demo of Snowflake Data Clean Rooms, register for Snowflake’s virtual Accelerate Advertising, Media, & Entertainment event and learn how media and advertising organizations collaborate in the Media Data Cloud to enhance business growth and data monetization, develop new products, and harness the power of AI and ML.

The post Snowflake Data Clean Rooms: Securely Collaborate to Unlock Insights and Value appeared first on Snowflake. View the full article
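To ground the attribution example above, here is a minimal, purely conceptual sketch of the privacy principle at work: identifiers are pseudonymized before matching, and only aggregate results above a minimum group size are released. This is plain Python for illustration only, not the Snowflake Data Clean Rooms API, and every name in it is hypothetical.

```python
import hashlib
from typing import Iterable, Optional

def pseudonymize(ids: Iterable[str], shared_salt: str) -> set:
    """Hash raw identifiers so neither party ever sees the other's plaintext IDs."""
    return {
        hashlib.sha256((shared_salt + i.strip().lower()).encode()).hexdigest()
        for i in ids
    }

def audience_overlap(brand_ids: Iterable[str],
                     publisher_ids: Iterable[str],
                     shared_salt: str,
                     min_group_size: int = 50) -> Optional[int]:
    """Return only an aggregate overlap count; suppress results that are too small
    to release without re-identification risk."""
    overlap = pseudonymize(brand_ids, shared_salt) & pseudonymize(publisher_ids, shared_salt)
    return len(overlap) if len(overlap) >= min_group_size else None
```

In an actual clean room these controls are enforced by the platform rather than by convention: the parties never exchange even hashed lists directly, and the allowed joins and outputs are defined by the clean room’s templates.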
  23. As organizations seek to drive more value from their data, observability plays a vital role in ensuring the performance, security and reliability of applications and pipelines while helping to reduce costs. At Snowflake, we aim to provide developers and engineers with the best possible observability experience to monitor and manage their Snowflake environment. One of our partners in this area is Observe, which offers a SaaS observability product that is built and operated on the Data Cloud.

We’re excited to announce today that Snowflake Ventures is making an investment in Observe to significantly expand the observability experience we provide for our customers. Following the investment, Observe plans to develop best-in-class observability features that will help our customers monitor and manage their Snowflake environments even more effectively. Solutions such as out-of-the-box dashboards and new visualizations will empower developers and engineers to accelerate their work and troubleshoot problems more quickly and easily.

In addition, because Observe is built on the Data Cloud, our customers will have the option to keep their observability data within their Snowflake account instead of sending it out to a third-party provider. This further simplifies and enhances their data governance by allowing them to keep more of their data within the secure environment of their Snowflake account.

Observe is an example of how more companies are building and operating SaaS applications on the Data Cloud. By doing so, these companies gain access to our scalable infrastructure and powerful analytics while being able to offer a more advanced and differentiated experience to Snowflake customers. We will continue to expand the signals we provide for developers and engineers to manage, monitor and troubleshoot their workloads in the Data Cloud. Our partnerships with companies like Observe help turn signals into actionable insights that are presented in compelling and innovative ways.

The post Snowflake Invests in Observe to Expand Observability in the Data Cloud appeared first on Snowflake. View the full article
  24. Snowflake is committed to helping our customers unlock the power of artificial intelligence (AI) to drive better decisions, improve productivity and reach more customers using all types of data. Large Language Models (LLMs) are a critical component of generative AI applications, and multimodal models are an exciting category that allows users to go beyond text and incorporate images and video into their prompts to get a better understanding of the context and meaning of the data.

Today we are excited to announce that we’re furthering our partnership with Reka to support its suite of highly capable multimodal models in Snowflake Cortex. This includes Flash, an optimized model for everyday questions, as well as developing support for Core, Reka’s largest and most performant model. This will allow our customers to seamlessly unlock value from more types of data with the power of multimodal AI in the same environment where their data lives, protected by the built-in security and governance of the Snowflake Data Cloud. Reka’s latest testing reveals that both Flash and Core are highly capable, with Core’s capabilities approaching GPT-4 and Gemini Ultra, making it one of the most capable LLMs available today.

In addition to expanding our partnership with NVIDIA to power gen AI applications and enhance model performance and scalability, our partnerships with Reka and other LLM providers are the latest examples of how Snowflake is accelerating our AI capabilities for customers. Snowflake remains steadfast in our commitment to make AI secure, easy to use and quick to implement for both business and technical users. Taken together, our partnerships and investments in AI ensure we continue to provide customers with maximum choice around the tools and technologies they need to build powerful AI applications.

The post Snowflake Brings Gen AI to Images, Video and More With Multimodal Language Models from Reka in Snowflake Cortex appeared first on Snowflake. View the full article
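If and when the Reka models are enabled in an account, calling a Cortex LLM function over governed data might look roughly like the sketch below. The table and column names and the 'reka-flash' model identifier are assumptions here; which models are available depends on your region and account configuration.

```python
from snowflake.snowpark import Session

def summarize_reviews(session: Session):
    # SNOWFLAKE.CORTEX.COMPLETE runs the model inside Snowflake, next to the data;
    # the table/column names and the 'reka-flash' identifier are illustrative.
    return session.sql("""
        SELECT
            review_id,
            SNOWFLAKE.CORTEX.COMPLETE(
                'reka-flash',
                CONCAT('Summarize this customer review in one sentence: ', review_text)
            ) AS summary
        FROM PRODUCT_REVIEWS
        LIMIT 10
    """).collect()
```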
  25. Performance tuning in Snowflake is the process of optimizing configuration and SQL queries to improve the efficiency and speed of data operations. It involves adjusting various settings and rewriting queries to reduce execution time and resource consumption, ultimately leading to cost savings and enhanced user satisfaction. Performance tuning is crucial in Snowflake for several reasons… View the full article
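As a small, hedged example of where that tuning work usually starts, the sketch below pulls recent long-running queries from the standard SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view so you can see which ones spill to storage or scan far more partitions than they need. The warehouse name in the comments is illustrative.

```python
from snowflake.snowpark import Session

def find_tuning_candidates(session: Session, min_seconds: int = 60):
    """List the slowest queries from the last 7 days, with spill and pruning stats."""
    return session.sql(f"""
        SELECT
            query_id,
            warehouse_name,
            total_elapsed_time / 1000 AS elapsed_seconds,
            bytes_spilled_to_local_storage,
            partitions_scanned,
            partitions_total
        FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
        WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
          AND total_elapsed_time > {min_seconds * 1000}
        ORDER BY total_elapsed_time DESC
        LIMIT 20
    """).collect()

# Typical follow-ups, depending on what the numbers show (warehouse name illustrative):
#   heavy spilling         -> ALTER WAREHOUSE ANALYTICS_WH SET WAREHOUSE_SIZE = 'LARGE';
#   poor partition pruning -> tighten filters or define a clustering key on the table.
```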