Search the Community
Showing results for tags 'gemini'.
-
The journey from data to insights can be fragmented, complex, and time-consuming. Data teams spend time on repetitive and routine tasks such as ingesting structured and unstructured data, wrangling data in preparation for analysis, and optimizing and maintaining pipelines. They would much rather focus on higher-value analysis and insights-led decision making. At Next ‘23, we introduced Duet AI in BigQuery. This year at Next ‘24, Duet AI in BigQuery becomes Gemini in BigQuery, which provides AI-powered experiences for data preparation, analysis, and engineering, as well as intelligent recommendations to enhance user productivity and optimize costs.

"With the new AI-powered assistive features in BigQuery and the ease of integrating with other Google Workspace products, our teams can extract valuable insights from data. The natural language-based experiences, low-code data preparation tools, and automatic code generation features streamline high-priority analytics workflows, enhancing the productivity of data practitioners and providing the space to focus on high-impact initiatives. Moreover, users with varying skill sets, including our business users, can leverage more accessible data insights to effect beneficial changes, fostering an inclusive data-driven culture within our organization," said Tim Velasquez, Head of Analytics, Veo.

Let’s take a closer look at the new features of Gemini in BigQuery.

Accelerate data preparation with AI

Your business insights are only as good as your data. When you work with large datasets that come from a variety of sources, there are often inconsistent formats, errors, and missing data, so cleaning, transforming, and structuring them can be a major hurdle. To simplify data preparation, validation, and enrichment, BigQuery now includes AI-augmented data preparation that helps users cleanse and wrangle their data.
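Conceptually, the kind of routine cleanup this feature is meant to automate looks like the following pure-Python sketch. The column names, date formats, and fill-in rules here are made up purely for illustration; they are not Gemini's actual behavior.

```python
from datetime import datetime

# Hypothetical raw rows with inconsistent date formats and missing values.
ROWS = [
    {"order_date": "2024-04-09", "region": "EMEA", "revenue": "1200"},
    {"order_date": "04/10/2024", "region": "", "revenue": "950"},
    {"order_date": "Apr 11, 2024", "region": "APAC", "revenue": None},
]

DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%b %d, %Y")

def clean(row):
    """Normalize the date, default the region, and coerce revenue to a number."""
    for fmt in DATE_FORMATS:
        try:
            date = datetime.strptime(row["order_date"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        date = None  # leave unparseable dates for manual review
    return {
        "order_date": date,
        "region": row["region"] or "UNKNOWN",
        "revenue": float(row["revenue"]) if row["revenue"] else 0.0,
    }

cleaned = [clean(r) for r in ROWS]
```

Hand-writing rules like these for every source is exactly the toil the AI-assisted preparation is intended to remove.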
Additionally, we are enabling users to build low-code visual data pipelines, or rebuild legacy pipelines, in BigQuery. Once the pipelines are running in production, AI assists with finding and resolving issues such as schema or data drift, significantly reducing the toil associated with maintaining a data pipeline. Because the resulting pipelines run in BigQuery, users also benefit from integrated metadata management, automatic end-to-end data lineage, and capacity management.

Gemini in BigQuery provides AI-driven assistance for users to clean and wrangle data

Kickstart the data-to-insights journey

Most data analysis starts with exploration: finding the right dataset, understanding the data’s structure, identifying key patterns, and identifying the most valuable insights you want to extract. This step can be cumbersome and time-consuming, especially if you are working with a new dataset or are new to the team. To address this problem, Gemini in BigQuery provides new semantic search capabilities to help you pinpoint the most relevant tables for your tasks. Leveraging the metadata and profiling information of these tables from Dataplex, Gemini in BigQuery surfaces relevant, executable queries that you can run with just one click. You can learn more about BigQuery data insights here.

Gemini in BigQuery suggests executable queries for tables that you can run in a single click

Reimagine analytics workflows with natural language

To boost user productivity, we’re also rethinking the end-to-end user experience. The new BigQuery data canvas provides a reimagined natural language-based experience for data exploration, curation, wrangling, analysis, and visualization, allowing you to explore and scaffold your data journeys in a graphical workflow that mirrors your mental model.
For example, to analyze a recent marketing campaign, you can use simple natural language prompts to discover campaign data sources, integrate them with existing customer data, derive insights, and share visual reports with executives, all within a single experience. Watch this video for a quick overview of BigQuery data canvas.

BigQuery data canvas allows you to explore and analyze datasets, and create a customized visualization, all using natural language prompts within the same interface

Enhance productivity with SQL and Python code assistance

Even advanced users sometimes struggle to remember all the details of SQL or Python syntax, and navigating through numerous tables, columns, and relationships can be daunting. Gemini in BigQuery helps you write and edit SQL or Python code using simple natural language prompts, referencing relevant schemas and metadata. You can also leverage BigQuery’s in-console chat interface to explore tutorials, documentation, and best practices for specific tasks, using simple prompts such as “How can I use BigQuery materialized views?”, “How do I ingest JSON data?”, and “How can I improve query performance?”

Optimize analytics for performance and speed

With growing data volumes, analytics practitioners, including data administrators, find it increasingly challenging to effectively manage capacity and enhance query performance. We are introducing recommendations that can help continuously improve query performance, minimize errors, and optimize your platform costs. With these recommendations, you can identify materialized views that could be created or deleted based on your query patterns, and partition or cluster your tables. Additionally, you can autotune Spark pipelines and troubleshoot failures and performance issues.

Get started

To learn more about Gemini in BigQuery, watch this short overview video, refer to the documentation, and sign up to get early access to the preview features.
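For context on the materialized-view recommendations mentioned above, here is the kind of DDL such a recommendation ultimately corresponds to, rendered by a small helper. This is only an illustrative sketch: the dataset, table, and column names are hypothetical, and the actual recommendations surface through the BigQuery console rather than through code like this.

```python
def materialized_view_ddl(view, source, group_cols, metric_col, partition_col):
    """Render BigQuery-style DDL for a materialized view that pre-aggregates a metric.

    All identifiers are caller-supplied; the ones used below are made up.
    """
    cols = ", ".join(group_cols)
    return (
        f"CREATE MATERIALIZED VIEW {view}\n"
        f"PARTITION BY {partition_col}\n"
        f"AS SELECT {cols}, {partition_col}, SUM({metric_col}) AS total_{metric_col}\n"
        f"FROM {source}\n"
        f"GROUP BY {cols}, {partition_col}"
    )

ddl = materialized_view_ddl(
    "mydataset.daily_revenue_mv",
    "mydataset.orders",
    ["region"],
    "revenue",
    "order_date",
)
```

A view like this answers repeated aggregate queries from precomputed results instead of rescanning the base table, which is why create/delete recommendations driven by query patterns can cut both latency and cost.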
If you’re at Next ‘24, join our data and analytics breakout sessions and stop by the demo stations to explore further and see these capabilities in action. Pricing details for Gemini in BigQuery will be shared when it becomes generally available to all customers. View the full article
-
Tagged with: google gemini, google bigquery (and 2 more)
-
Hello from sunny Las Vegas, where we kicked off Google Cloud Next ’24 today! What happened in Vegas on the first day of Next? In a word, AI. Read on for highlights of the day. We started things off with a reminder of how companies are using AI — not as a future thing, but a today thing.

New way, today

Google and Alphabet CEO Sundar Pichai then joined the Next keynote on screen, reminding us how far we’ve come with generative AI — and how fast. “Last summer, we were just beginning to imagine how this technology could transform businesses, and today, that transformation is well underway,” Sundar said. On the keynote stage, Thomas Kurian and executives like Amin Vahdat, Aparna Pappu, and Brad Calder highlighted some of the biggest, most recognizable brands today: Goldman Sachs, Mercedes, Uber, Walmart, and Wayfair, to name a few. And throughout, they announced some incredible new products, partners, and technologies. Here’s just a small taste of all the things we announced today, across four key themes:

1. Use AI to do amazing things

To quote Sundar, we’re using AI to build products that are “radically more helpful.” Whether you’re a developer creating the next great app, an architect building and managing infrastructure, an end user collaborating on content with your colleagues, or a data scientist plumbing the depths of your business data, here are just a few of the new ways that Google AI can help you do your job better.

What we announced: Gemini for Google Cloud helps users build AI agents to work and code more efficiently, manage their applications, gain deeper data insights, and identify and resolve security threats — all deeply integrated in a range of Google Cloud offerings:

Gemini Code Assist, the evolution of Duet AI for Developers, lets developers use natural language to add to, change, analyze, and streamline their code, across their private codebases and from their favorite integrated development environments (IDEs).
Gemini Cloud Assist helps cloud teams design, operate, and optimize their application lifecycle. Gemini in Security, Gemini in Databases, Gemini in BigQuery, and Gemini in Looker help elevate teams’ skills and capabilities across these critical workloads.

Then there’s Google Workspace, which we built around core tenets of real-time creation and collaboration. Today, we took that up a level with:

Google Vids, your AI-powered video, writing, production, and editing assistant, all in one. Sitting alongside other productivity tools like Docs, Sheets, and Slides, Vids can help anyone become a great storyteller at work.

A new AI Meetings and Messaging add-on, which includes features like Take notes for me (now in preview), Translate for me (coming in June), and automatic translation of messages and on-demand conversation summaries in Google Chat. This add-on is available for $10 per user, per month, and it can be added to most Workspace plans.

2. … built on the most advanced foundation models

All the above-mentioned capabilities are based on, you guessed it, Gemini, Google’s most powerful model. Today at Next ’24, we announced ways to help developers bring the power of Gemini to their own applications through Vertex AI and other AI development platforms.

What we announced:

Gemini 1.5 Pro is now available in public preview to Vertex AI customers. Now, developers can see for themselves what it means to build with a 1M-token context window.

Imagen, Google’s text-to-image model, can now create live images from text, in preview. Just imagine generating animated images such as GIFs from a simple text prompt… Imagen also gets advanced photo-editing features, including inpainting and outpainting, and a digital watermarking feature powered by Google DeepMind’s SynthID.

Vertex AI has new MLOps capabilities: Vertex AI Prompt Management and new Evaluation tools.
Vertex AI Agent Builder brings together the Vertex AI Search and Conversation products, along with a number of enhanced tools for developers, to make it much easier to build and deploy enterprise-ready gen AI experiences.

3. … running on AI-optimized infrastructure

None of this would be possible without the investments in workload-optimized infrastructure that we make to power our own systems as well as yours.

What we announced:

Enhancements across our AI Hypercomputer architecture, including the general availability of Cloud TPU v5p and of A3 Mega VMs powered by NVIDIA H100 Tensor Core GPUs; storage portfolio optimizations including Hyperdisk ML; and open software advancements including JetStream and JAX and PyTorch/XLA releases.

Google Axion, our first custom Arm®-based CPU designed for the data center.

New compute and networking capabilities, including C4 and N4 general-purpose VMs powered by 5th Generation Intel Xeon processors, as well as enhancements across Google Distributed Cloud.

4. … all grounded on trusted data

An AI model can only be as good as the data it’s trained on. And where better to get quality data than in your trusted enterprise databases and data warehouses? At Google Cloud, we think of this as “enterprise truth,” and we build capabilities across our Data Cloud to make sure your AI applications are based on trusted data.

What we announced:

Database enhancements: Google Cloud databases are more AI-capable than ever. AlloyDB AI includes new vector capabilities, easier access to remote models, and flexible natural language support. Firestore, meanwhile, joins the long list of Google databases with strong vector search capabilities.

Big news for BigQuery: We’re anchoring on BigQuery as our unified data analytics platform, including across clouds via BigQuery Omni. BigQuery also gets a new data canvas — a new natural language-based experience for data exploration, curation, wrangling, analysis, and visualization workflows.
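Vector search, as mentioned for AlloyDB and Firestore above, ranks stored items by how close their embedding vectors are to a query vector. The following is a minimal pure-Python sketch of cosine-similarity ranking, purely conceptual: real databases use indexed approximate search at scale, and the toy three-dimensional vectors below stand in for real embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "document embeddings"; the names and vectors are invented for illustration.
DOCS = {
    "returns policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "refund process": [0.8, 0.2, 0.1],
}

query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I get my money back?"
ranked = sorted(DOCS, key=lambda d: cosine_similarity(query, DOCS[d]), reverse=True)
```

Because similarity is computed on meaning-bearing embeddings rather than keywords, the refund-related documents rank above the shipping one even though no words overlap, which is what makes this a natural retrieval layer for grounding AI applications.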
Last but not least, you can now ground your models not only in your enterprise data, but also in Google Search results, so they have access to the latest, high-quality information.

And that was just during the morning keynote! From there, attendees went off to explore more than 300 Spotlights and breakout sessions, the Innovators Hive, and of course, the show floor, where partners this week are highlighting over 100 AI solutions built with Google Cloud technologies. We can’t wait to see you again tomorrow, when we’ll share even more news, go deep on today’s announcements, and host the perennial favorite — the Developer Keynote. Have fun in Vegas tonight. But don’t stay out too late, because there’s lots more ahead tomorrow! View the full article
-
Gemini is lining up to become an even bigger part of the Android ecosystem, as a toggle switch for the AI may soon appear in the official Google app. Evidence of this update was discovered in a recent beta by industry insider AssembleDebug, who then shared his findings with news site PiunikaWeb. The feature could appear as a toggle switch right above the search bar. Flipping the switch causes the standard Search interface to morph into the Gemini interface, where you can enter a prompt, talk to the model, or upload an image. According to Android Authority, turning on the AI launches a window asking permission to make the switch, assuming you haven't already granted it. If this sounds familiar, that’s because the Google app on iOS has had the same function since early February. Activating the feature on either operating system has Gemini replace Google Assistant as your go-to helper.

Gemini's new role

You can hop between the two at any time; it’s not a permanent fixture, at least not right now. Google has been making its AI more prominent on smartphones and its first-party platforms. Recently, hints emerged of Gemini possibly gaining a summarization tool as well as reply suggestions in Gmail. It is possible to have the Gemini toggle switch appear on your Android phone today. AssembleDebug published a step-by-step guide on TheSpAndroid; however, the process will take you a long time. First, you’ll need a rooted smartphone running at least Android 12, which is a complicated process in and of itself. We have a guide explaining how to root your mobile device if you're interested in checking that out. Then you’ll need the latest Google app beta from the Play Store, the GMS Flags app from GitHub, and Gemini on your device. Even if you follow all of these instructions, there’s still a chance it may not work, so you’re probably better off waiting for the switch to officially roll out. There's no word on when that’ll happen.
The feature could make its official debut at next month’s Google I/O 2024 event, though. The tech giant is cooking up something big, and we can’t wait to see what it is. While you wait, check out TechRadar's list of the best Android phones for 2024.

You might also like

Gemini Nano will indeed roll out to Pixel 8 despite claims of "hardware limitations"
This is what Gemini AI in Google Messages may look like
Google’s ‘affordable’ Pixel 8a may not be so affordable after all

View the full article
-
Tagged with: google search, android (and 1 more)
-
We know that the Gemini generative AI chatbot is heading to Google Messages very soon – Google told us so last month – and we've now got some newly published screenshots that give us a preview of how the feature will look and function. These screenshots come courtesy of some code digging by the team at TheSpAndroid (via Android Authority), and we can see Gemini generating images, pulling up details from Google Maps, and suggesting snippets of code. It also seems as though it will pull up details from a connected Gmail account, and a Google account is required to use the chatbot – it won't work without one. It doesn't appear that the AI bot is configured to work in group chats either. The report states that Gemini in Google Messages doesn't seem to be able to analyze images and respond to prompts about them, though in a statement to Android Authority, Google said this functionality is included.

Limited availability

Image generation will be available (Image credit: TheSpAndroid)

There are no real surprises here, as Gemini in Google Messages looks to work much as it does on the web and in the Android app. However, it's interesting to see the technology built into the default messaging app for Google's mobile operating system. It's not clear when we'll all be able to test this for ourselves, though based on this latest leak, it shouldn't be much longer. There's already an official support page up for the feature explaining how to start interacting with the chatbot. At least to begin with, the feature will be limited to certain Android handsets: the Google Pixel 6 (or a later Pixel phone), the Google Pixel Fold, the Samsung Galaxy S22 (or a later Samsung Galaxy phone), or any Samsung Galaxy Z Flip or Galaxy Z Fold. All of this AI technology is moving forward at a rapid pace, and we should hear more about Gemini at Google I/O 2024 on May 14. We can expect a lot of AI talk from Apple too, at its own WWDC 2024 event, which should be happening sometime in June.
You might also like

Everything you need to know about Google Gemini
Hands-on with the new Google Gemini assistant
This could be the next Gemini AI trick for Android

View the full article
-
Everything you need to know to get started with Bard, Google’s experimental conversational AI chatbot. View the full article
-
Tagged with: google bard (and 3 more)
-
Forum Statistics
Total Topics: 67.4k
Total Posts: 65.3k