Search the Community
Showing results for tags 'chatbots'.
-
After officially entering the AI game in September 2023, Meta has just scaled up its AI chatbot experiment. Some WhatsApp users have been able to play around with the company's new AI assistant for a while now: Meta's AI upgrade was first introduced in beta in November last year, and more functionality appeared in users' search bars in March. However, the trial was restricted to people in the US in a limited capacity. Now, people in India and parts of Africa have spotted Meta AI on WhatsApp. Speaking to TechCrunch, the company confirmed that it plans to expand its AI trials to more users worldwide and integrate the AI chatbot into Facebook Messenger and Instagram, too.

More platforms, more users

"Our generative AI-powered experiences are under development in varying phases, and we’re testing a range of them publicly in a limited capacity," a Meta spokesperson told TechCrunch.

The move perfectly illustrates the company's determination to compete with AI's bigger players, most notably OpenAI and its ChatGPT-powered tools. What's more, India has more Facebook and WhatsApp users than any other country, and WhatsApp monthly usage is also reportedly high in African countries such as Nigeria, South Africa, and Kenya.

To check if you're one of the chosen ones, update your WhatsApp for iOS or Android app to the latest version directly from the official app store. Meta AI will appear on a rolling basis for selected users who have their app set to English.

"Meta starts limited testing of Meta AI on WhatsApp in different countries! Some users in specific countries can now experiment with the Meta AI chatbot, exploring its capabilities and functionalities through different entry points." https://t.co/PrycA4o0LI pic.twitter.com/BB2axOGnEj (April 12, 2024)

Designed to reply to users' queries and generate images from text prompts, the Meta AI chatbot is also landing on Facebook Messenger and Instagram in a limited capacity across the US, India, and a few more selected countries. On Instagram, TechCrunch reported, the plan is also to use the feature for search queries.

These signs of Meta AI expansion aren't happening in a vacuum, either. A few days back, the company announced plans to release AI models with "human-level cognition" capabilities. "We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory," Joelle Pineau, the vice president of AI research at Meta, told the Financial Times when announcing the new Llama 3 model.

The choice is now yours: help Meta accelerate its work towards an even more powerful AI (but do we all want that, really?), or remain a silent and skeptical spectator.

View the full article
-
The way that users get information from the web has evolved over the years. People used to rely on news sites and Google to keep abreast of what was going on in the world, but then Twitter arrived and cemented itself as an alternative (and often inaccurate) source of news. Although it's facing the threat of being banned in the US, TikTok has become a major source of information for younger users, and AI chatbots have come into their own as a valuable tool for delivering tailored, instant information.

The rise of voice-activated AI assistants like Amazon's Alexa and Google Assistant has also revolutionized the way we access information, allowing users to simply ask for what they want to know rather than having to search for it manually. However, with this evolution comes the responsibility of discerning reliable sources from misinformation, a skill that is becoming increasingly important in the AI age.

Recent surveys by Applause and Forrester indicate a significant shift in consumer behavior, with users increasingly favoring AI chatbots over traditional search engines for both research and basic queries.

Similar findings

Applause's 2024 Generative AI Survey reveals that 91% of respondents use chatbots for research, and 81% prefer them over search engines for basic queries. However, as is perhaps to be expected, concerns about data privacy, bias, and performance persist. Applause found ChatGPT is the most popular chatbot, used by 91% of users, ahead of Google Gemini (63%) and Microsoft Copilot (55%). Despite worries about providing private information to chatbots, with 89% of respondents expressing concern, the practical applications of GenAI are now widely acknowledged. However, only 19% of users believe that chatbots understand their prompts every time, indicating room for improvement.

Forrester's State of Consumer Usage of Generative AI 2024 echoes these findings, noting that GenAI has made AI more visible in consumers' daily lives. While companies race to incorporate AI, consumer adoption is still in its infancy due to concerns about its ethical implications. The report also highlights demographic differences in GenAI adoption, with younger, male, and more highly educated consumers more likely to have used the technology. It states that almost half of Millennial and Gen Z adults in the US, UK, and France have used GenAI, compared with only 12% of Baby Boomers. Forrester also found 34% of US consumers had used GenAI, compared to 27% in the UK and 25% in France.

Work still needed

Despite widespread concerns, the benefit of GenAI is widely recognized. Among online adults who had heard of GenAI, 50% agreed that it would make it easier to find information online. However, 45% agreed that GenAI posed a serious threat to society, indicating a split in consumer attitudes towards the technology.

The surveys suggest that the golden era of search engines might be coming to an end, as consumers increasingly turn towards AI chatbots for their information needs. However, as Chris Sheehan, SVP of Strategic Accounts and AI at Applause, sums up: "Chatbots are getting better at dealing with toxicity, bias and inaccuracy – however, concerns still remain. Not surprisingly, switching between chatbots to accomplish different tasks is common, while multimodal capabilities are now table stakes. To gain further adoption, chatbots need to continue to train models on quality data in specific domains and thoroughly test across a diverse user base to drive down toxicity and inaccuracy."

More from TechRadar Pro
- These are the best AI chatbots for businesses around today
- Google's non-profit arm launches AI accelerator to fund the next big thing
- New search engines fueled by Generative AI will compete with Google

View the full article
-
An AI chatbot released by the New York City government, designed to assist business owners in accessing information, has come under scrutiny for sharing inaccurate and misleading guidance. A report by The Markup, co-published with local nonprofit newsrooms Documented and The City, reveals multiple instances where the chatbot provided wrong advice about legal obligations. For example, the AI chatbot claimed that bosses could accept workers’ tips and that landlords are allowed to discriminate based on source of income – both incorrect.

Chatbot fail?

Launched in October 2023 by Mayor Adams’s administration as an extension of the MyCity portal, the chatbot, described as “a one-stop shop for city services and benefits,” is powered by Microsoft’s Azure services. Despite its intention to serve as a reliable source of information sourced directly from the city government’s websites, the pilot program has been found to generate flawed responses. One example given by The Markup sees the chatbot asserting that businesses could operate as cashless establishments, despite New York City’s 2020 ban on such practices.

Responding to the report, Leslie Brown, spokesperson for the NYC Office of Technology and Innovation, acknowledged the chatbot’s imperfections, emphasizing ongoing efforts to refine the AI tool: “In line with the city’s key principles of reliability and transparency around AI, the site informs users the clearly marked pilot beta product should only be used for business-related content, tells users there are potential risks, and encourages them via disclaimer to both double-check its responses with the provided links and not use them as a substitute for professional advice.”

After a months-long honeymoon period, the cracks are beginning to show as businesses and government agencies start to question the reliability, safety, and security of artificial intelligence, with many imposing bans and others introducing strict regulations.

More from TechRadar Pro
- EU passes landmark AI act, paving the way for greater AI regulation
- Draft your best work with the help of the best AI writers
- Check out the best cloud hosting providers

View the full article
-
Videos are full of valuable information, but tools are often needed to help find it. From educational institutions seeking to analyze lectures and tutorials to businesses aiming to understand customer sentiment in video reviews, transcribing and understanding video content is crucial for informed decision-making and innovation. Recently, advancements in AI/ML technologies have made this task more accessible than ever.

Developing GenAI technologies with Docker opens up endless possibilities for unlocking insights from video content. By leveraging transcription, embeddings, and large language models (LLMs), organizations can gain deeper understanding and make informed decisions using diverse and raw data such as videos. In this article, we’ll dive into a video transcription and chat project that leverages the GenAI Stack, along with seamless integration provided by Docker, to streamline video content processing and understanding.

High-level architecture

The application’s architecture is designed to facilitate efficient processing and analysis of video content, leveraging cutting-edge AI technologies and containerization for scalability and flexibility. Figure 1 shows an overview of the architecture, which uses Pinecone to store and retrieve the embeddings of video transcriptions.

Figure 1: Schematic diagram outlining a two-component system for processing and interacting with video data.

The application’s high-level service architecture includes the following:

- yt-whisper: A local service, run by Docker Compose, that interacts with the remote OpenAI and Pinecone services. Whisper is an automatic speech recognition (ASR) system developed by OpenAI, representing a significant milestone in AI-driven speech processing. Trained on an extensive dataset of 680,000 hours of multilingual and multitask supervised data sourced from the web, Whisper demonstrates remarkable robustness and accuracy in English speech recognition.
- Dockerbot: A local service, run by Docker Compose, that interacts with the remote OpenAI and Pinecone services. The service takes the question of a user, computes a corresponding embedding, and then finds the most relevant transcriptions in the video knowledge database. The transcriptions are then presented to an LLM, which takes the transcriptions and the question and tries to provide an answer based on this information.
- OpenAI: The OpenAI API provides an LLM service, which is known for its cutting-edge AI and machine learning technologies. In this application, OpenAI’s technology is used to generate transcriptions from audio (using the Whisper model) and to create embeddings for text data, as well as to generate responses to user queries (using GPT and chat completions).
- Pinecone: A vector database service optimized for similarity search, used for building and deploying large-scale vector search applications. In this application, Pinecone is employed to store and retrieve the embeddings of video transcriptions, enabling efficient and relevant search functionality within the application based on user queries.
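To make the Dockerbot flow just described more concrete, here is a minimal Python sketch of the embed-retrieve-answer loop. This is not the project's actual code: the index name, the "text" metadata field, and the model choices are illustrative assumptions.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone().Index("video-transcriptions")  # hypothetical index name

def answer(question: str) -> str:
    # 1. Embed the user's question.
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    # 2. Retrieve the most relevant transcript chunks from Pinecone.
    matches = index.query(vector=embedding, top_k=5, include_metadata=True)
    context = "\n".join(m["metadata"]["text"] for m in matches["matches"])
    # 3. Let the LLM answer using only the retrieved transcripts.
    chat = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Answer using only this transcript context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content
```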
Getting started

To get started, complete the following steps:

1. Create an OpenAI API key.
2. Ensure that you have a Pinecone API key.
3. Ensure that you have installed the latest version of Docker Desktop.

The application is a chatbot that can answer questions from a video. Additionally, it provides timestamps from the video that can help you find the sources used to answer your question.

Clone the repository

The next step is to clone the repository:

    git clone https://github.com/dockersamples/docker-genai.git

The project contains the following directories and files:

    ├── docker-genai/
    │   ├── docker-bot/
    │   ├── yt-whisper/
    │   ├── .env.example
    │   ├── .gitignore
    │   ├── LICENSE
    │   ├── README.md
    │   └── docker-compose.yaml

Specify your API keys

In the /docker-genai directory, create a text file called .env, and specify your API keys inside. The following snippet shows the contents of the .env.example file that you can refer to as an example.

    #-------------------------------------------------------------
    # OpenAI
    #-------------------------------------------------------------
    OPENAI_TOKEN=your-api-key # Replace your-api-key with your personal API key

    #-------------------------------------------------------------
    # Pinecone
    #-------------------------------------------------------------
    PINECONE_TOKEN=your-api-key # Replace your-api-key with your personal API key

Build and run the application

In a terminal, change directory to your docker-genai directory and run the following command:

    docker compose up --build

Next, Docker Compose builds and runs the application based on the services defined in the docker-compose.yaml file. When the application is running, you’ll see the logs of two services in the terminal. In the logs, you’ll see the services are exposed on ports 8503 and 8504. The two services are complementary to each other.

The yt-whisper service is running on port 8503. This service feeds the Pinecone database with videos that you want to archive in your knowledge database. The next section explores the yt-whisper service.

Using yt-whisper

The yt-whisper service is a YouTube video processing service that uses the OpenAI Whisper model to generate transcriptions of videos and stores them in a Pinecone database. The following steps outline how to use the service:

1. Open a browser and access the yt-whisper service at http://localhost:8503.
2. Once the application appears, specify a YouTube video URL in the URL field and select Submit. The example shown in Figure 2 uses a video from David Cardozo.

Figure 2: A web interface showcasing processed video content with a feature to download transcriptions.

Submitting a video

The yt-whisper service downloads the audio of the video, then uses Whisper to transcribe it into a WebVTT (*.vtt) format (which you can download). Next, it uses the “text-embedding-3-small” model to create embeddings, and finally uploads those embeddings into the Pinecone database. After the video is processed, a video list appears in the web app that informs you which videos have been indexed in Pinecone. It also provides a button to download the transcript.
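As a rough illustration of that submit-a-video pipeline (transcribe, embed, upsert), here is a hedged Python sketch. The real yt-whisper service differs in its details; the index name, chunking strategy, and metadata fields below are assumptions.

```python
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
index = Pinecone().Index("video-transcriptions")  # hypothetical index name

def transcribe_and_index(audio_path: str, video_id: str) -> None:
    # Transcribe the downloaded audio with Whisper, requesting WebVTT output.
    with open(audio_path, "rb") as f:
        vtt = client.audio.transcriptions.create(
            model="whisper-1", file=f, response_format="vtt"
        )
    # Embed each caption block (text plus its timestamp line) and upsert it,
    # keeping the raw block as metadata so answers can cite timestamps.
    for i, block in enumerate(vtt.split("\n\n")):
        embedding = client.embeddings.create(
            model="text-embedding-3-small", input=block
        ).data[0].embedding
        index.upsert(vectors=[(f"{video_id}-{i}", embedding, {"text": block})])
```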
Accessing Dockerbot chat service

You can now access the Dockerbot chat service on port 8504 and ask questions about the videos, as shown in Figure 3.

Figure 3: Example of a user asking Dockerbot about NVIDIA containers and the application giving a response with links to specific timestamps in the video.

Conclusion

In this article, we explored the exciting potential of GenAI technologies combined with Docker for unlocking valuable insights from video content. We saw how the integration of cutting-edge AI models like Whisper, coupled with efficient database solutions like Pinecone, empowers organizations to transform raw video data into actionable knowledge. Whether you’re an experienced developer or just starting to explore the world of AI, the provided resources and code make it simple to embark on your own video-understanding projects.

Learn more
- Accelerated AI/ML with Docker
- Build and run natural language processing (NLP) applications with Docker
- Video transcription and chat using GenAI Stack
- PDF analysis and chat using GenAI Stack
- Subscribe to the Docker Newsletter

Have questions? The Docker community is here to help.

View the full article
-
Tagged with: video analysis, transcription (and 2 more)
-
Bloomberg's Mark Gurman today reported that Apple is not planning to debut its own generative AI chatbot with its next major software updates, including iOS 18 for the iPhone. Instead, he reiterated that Apple has held discussions with companies such as Google, OpenAI, and Baidu about potential generative AI partnerships. Recent reports indicated that Apple has considered licensing existing chatbots, such as Google's Gemini and OpenAI's ChatGPT, but Apple offering its own chatbot of some kind in iOS 18 had not been explicitly ruled out until now.

Gurman still expects AI to be a major focus at Apple's just-announced WWDC 2024 developers conference. He reiterated that Apple plans to announce new AI features that "assist users in their daily lives," but he did not provide any specific details. He has previously reported that generative AI will improve Siri's ability to answer more complex questions and allow the Messages app to auto-complete sentences. Other apps like Apple Music, Shortcuts, Pages, Numbers, and Keynote are also expected to gain generative AI functionality.

Apple already promised that the company would share generative AI announcements later this year, and the company hinted at it again today. WWDC 2024 runs from June 10 through June 14, with video sessions to be shared on YouTube for the first time. The first iOS 18 beta should be made available to developers following the WWDC keynote, and the update is expected to be released to all users in September.

Related Roundup: iOS 18
Tag: Mark Gurman

This article, "iOS 18 Reportedly Won't Feature Apple's Own ChatGPT-Like Chatbot" first appeared on MacRumors.com

Discuss this article in our forums

View the full article
-
WhatsApp is slated to receive a pair of AI-powered upgrades aiming to help people answer tough questions on the fly, as well as edit images on the platform.

Starting with answering questions, the upgrade integrates one of Meta’s AI models into the WhatsApp search bar. Doing so, according to WABetaInfo, would allow users to directly input queries without having to create a separate chat room for the AI: you'd be able to hold a quick conversation right on the same page. It appears this is an extension of the in-app assistants that originally came out back in November 2023. A screenshot in the report reveals WhatsApp will provide a handful of prompts to get a conversation flowing.

It’s unknown just how capable the search bar AI will be. The assistants are available in different personas specializing in certain topics, but judging by the aforementioned screenshot, it appears the search bar will house the basic Meta AI model. (It would be really fun if we could assign the Snoop Dogg persona as the main assistant.)

"WhatsApp beta for Android 2.24.7.14: what's new? WhatsApp is working on a feature to ask queries to Meta AI, and it will be available in a future update!" https://t.co/qSqJ9JobbK pic.twitter.com/mKM9PLCF3V (March 23, 2024)

AI image editing

The second update is a collection of image editing features discovered by industry expert AssembleDebug after diving into a recent WhatsApp beta. AssembleDebug found three possibly upcoming tools – Backdrop, Restyle, and Expand. It’s unknown exactly what they do, as not a single one works yet. However, the first two share a name with features currently available on Instagram, so they may, in fact, function the same way.

Backdrop could let users change the background of an image into something different via text prompt. Restyle can completely alter the art style of an uploaded picture. Think of these like filters, but more capable: you can turn a photograph into a watercolor painting or pixel art. It’s even possible to create wholly unique content through a text prompt.

(Image credit: AssembleDebug/TheSPAndroid)

Expand is the new kid on the block. Judging by the name, AssembleDebug theorizes it’ll harness the power of AI “to expand images beyond their visible area”. Technology like this already exists on other platforms: Photoshop, for example, has Generative Expand, and Samsung's Galaxy S24 series can expand images after they have been adjusted by rotation. WhatsApp gaining such an ability would be a great inclusion, as it would give users a robust editing tool for free; most versions of this tech are locked behind a subscription or tied to a specific device.

Do keep in mind that neither beta feature is available to early testers at the time of this writing. They're still in the works, and as stated earlier, we don’t know the full capabilities of either set. Regardless of their current status, it is great to see that one day WhatsApp may come equipped with AI tech on the same level as what you’d find on Instagram, especially when it comes to the search bar assistant. The update will make accessing that side of Meta's software more convenient for everyone.

If you prefer to tweak images on a desktop, check out TechRadar's list of the best free photo editors for PC and Mac.

You might also like
- WhatsApp's new security label will let you know if future third-party chats are safe
- WhatsApp for Android is making it much easier to find older messages
- How to transfer WhatsApp chats to a new phone

View the full article
-
Today, AWS announces the Bedrock GenAI chatbot blueprint in Amazon CodeCatalyst. CodeCatalyst customers can use this blueprint to quickly build and launch a generative AI chatbot with Amazon Bedrock and Anthropic’s Claude. This blueprint helps development teams build and deploy their own secure, login-protected LLM playground that can be customized to their data. You can get started by creating a project in CodeCatalyst. For more information, see the CodeCatalyst documentation and the Bedrock GenAI Chatbot documentation.
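Under the hood, a chatbot like the one this blueprint deploys ultimately calls the Bedrock runtime API. As a hedged illustration only (not the blueprint's actual code; the model ID and prompt are examples), invoking Claude through Bedrock with boto3 looks roughly like this:

```python
import json
import boto3

# The Bedrock runtime client handles model invocation.
bedrock = boto3.client("bedrock-runtime")

# Claude on Bedrock uses Anthropic's messages format.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": "Summarize our refund policy."}],
})

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    body=body,
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```

View the full article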
-
Tagged with: amazon codecatalyst, bedrock (and 2 more)
-
Amazon Web Services (AWS) has become the latest technology giant to unveil a workplace assistant powered by generative AI. Revealed by CEO Adam Selipsky at its AWS re:Invent 2023 event in Las Vegas, Amazon Q is designed specifically for business use, allowing employees to ask complex questions about their specific work tasks and get detailed responses. The company believes Amazon Q can help workers at every level, in roles from developers to marketing to call center staff, save huge amounts of time and stress by providing exactly the assistance needed.

Amazon Q

Speaking at his AWS re:Invent keynote, Selipsky noted that Amazon Q will also be "your business expert," learning about your business as you connect to it. The platform connects to over 40 popular tools such as Salesforce, Gmail, Slack, Atlassian, and Microsoft 365, taking in and indexing all this connected data and content, as well as identifying business-specific aspects such as its organizational structure and even product names.

(Image credit: AWS)

Trained on 17 years of AWS knowledge, Amazon Q can be specifically tailored and customized to the precise tasks you encounter at work, with conversational prompts and questions resulting in detailed answers provided in near real-time.

Selipsky noted that developers and IT workers in particular are being tasked with keeping pace with the rapidly changing and evolving technology industry, with generative AI moving faster than ever before. This innovation may be great for some, but ordinary workers often bear the brunt of the demand for new features and upgrades, meaning they find it hard to balance new demands with existing workloads.

"I really believe this is going to be transformative," Selipsky said of Amazon Q. "This is just the start of how we're going to revolutionize the future of work."

The Q Architect service can also help you research, troubleshoot, and analyze issues across your business, saving time and stress, and provide suggestions on optimizing AWS infrastructure queries, such as EC2 instances. Amazon Q is also going to be in the IDE for developers, working alongside CodeWhisperer for further code suggestions, helping reduce "hours of work... it'll do all the heavy lifting for you," Selipsky says.

Amazon Q will be available today in preview for existing AWS customers, with a wider release coming sometime in the future.

More from TechRadar Pro
- Looking for a bit more capacity? Here are the best cloud storage choices
- We've also rounded up the best cloud hosting providers
- Amazon wants to train millions of people in basic AI skills

View the full article
-
This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

A new week starts, and Spring is almost here! If you're curious about AWS news from the previous seven days, I've got you covered.

Last Week's Launches

Here are the launches that got my attention last week:

- Amazon S3 – Last week was AWS Pi Day 2023, celebrating 17 years of innovation since Amazon S3 was introduced on March 14, 2006. For the occasion, the team released many new capabilities: S3 Object Lambda now provides aliases that are interchangeable with bucket names and can be used with Amazon CloudFront to tailor content for end users. S3 now supports datasets that are replicated across multiple AWS accounts with cross-account support for S3 Multi-Region Access Points. You can now create and configure replication rules to automatically replicate S3 objects from one AWS Outpost to another. Amazon S3 has also simplified private connectivity from on-premises networks: with private DNS for S3, on-premises applications can use AWS PrivateLink to access S3 over an interface endpoint, while requests from your in-VPC applications access S3 using gateway endpoints. We also released Mountpoint for Amazon S3, a high-performance open source file client. Read more in the blog. Note that Mountpoint isn't a general-purpose networked file system and comes with some restrictions on file operations.
- Amazon Linux 2023 – Our new Linux-based operating system is now generally available. Sébastien's post is full of tips and info.
- Application Auto Scaling – You can now use arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies, scaling based on your own application-specific metrics. Read how it works with Amazon ECS services.
- AWS Data Exchange for Amazon S3 – Now generally available. You can share and find data files directly from S3 buckets, without the need to create or manage copies of the data.
- Amazon Neptune – Now offers a graph summary API to help understand important metadata about property graphs (PG) and resource description framework (RDF) graphs. Neptune also added support for slow query logs to help identify queries that need performance tuning.
- Amazon OpenSearch Service – The team introduced security analytics that provides new threat monitoring, detection, and alerting features. The service now supports OpenSearch version 2.5, which adds several new features such as support for Point in Time Search and improvements to observability and geospatial functionality.
- AWS Lake Formation and Apache Hive on Amazon EMR – Introduced fine-grained access controls that allow data administrators to define and enforce fine-grained table and column level security for customers accessing data via Apache Hive running on Amazon EMR.
- Amazon EC2 M1 Mac Instances – You can now update guest environments to a specific or the latest macOS version without having to tear down and recreate the existing macOS environments.
- AWS Chatbot – Now integrates with Microsoft Teams to simplify the way you troubleshoot and operate your AWS resources.
- Amazon GuardDuty RDS Protection for Amazon Aurora – Now generally available to help profile and monitor access activity to Aurora databases in your AWS account without impacting database performance.
- AWS Database Migration Service – Now supports validation to ensure that data is migrated accurately to S3, and can now generate an AWS Glue Data Catalog when migrating to S3.
- AWS Backup – You can now back up and restore virtual machines running on VMware vSphere 8 and with multiple vNICs.
- Amazon Kendra – There are new connectors to index documents and search for information across new content sources: Confluence Server, Confluence Cloud, Microsoft SharePoint OnPrem, and Microsoft SharePoint Cloud. This post shows how to use the Amazon Kendra connector for Microsoft Teams.

For a full list of AWS announcements, be sure to keep an eye on the What's New at AWS page.

Other AWS News

A few more blog posts you might have missed:

- Women founders Q&A – We're talking to six women founders and leaders about how they're making impacts in their communities, industries, and beyond.
- What you missed at the 2023 IMAGINE: Nonprofit conference – Where hundreds of nonprofit leaders, technologists, and innovators gathered to learn and share how AWS can drive a positive impact for people and the planet.
- Monitoring load balancers using Amazon CloudWatch anomaly detection alarms – The metrics emitted by load balancers provide crucial and unique insight into service health, service performance, and end-to-end network performance.
- Extend geospatial queries in Amazon Athena with user-defined functions (UDFs) and AWS Lambda – Using a solution based on Uber's Hexagonal Hierarchical Spatial Index (H3) to divide the globe into equally-sized hexagons.
- How cities can use transport data to reduce pollution and increase safety – A guest post by Rikesh Shah, outgoing head of open innovation at Transport for London.

For AWS open-source news and updates, here's the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events

Here are some opportunities to meet:

- AWS Public Sector Day 2023 (March 21, London, UK) – An event dedicated to helping public sector organizations use technology to achieve more with less through the current challenging conditions.
- Women in Tech at Skills Center Arlington (March 23, VA, USA) – Let's celebrate the history and legacy of women in tech.

The AWS Summits season is warming up! You can sign up here to know when registration opens in your area.

That's all from me for this week. Come back next Monday for another Week in Review!

— Danilo

View the full article
-
Tagged with: women in tech, s3 (and 23 more)
-
AWS Control Tower has updated its Region deny guardrail to include additional AWS global service APIs to assist in retrieving configuration settings, dashboard information, and support for an interactive chat agent. The Region deny guardrail, 'Deny access to AWS based on the requested AWS Region', assists you in limiting access to AWS services and operations for enrolled accounts in your AWS Control Tower environment. The AWS Control Tower Region deny guardrail helps ensure that any customer data you upload to AWS services is located only in the AWS Regions that you specify. You can select the AWS Region or Regions in which your customer data is stored and processed.
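The guardrail is implemented as a service control policy. The sketch below (written as a Python dict for illustration; it is not the exact policy Control Tower manages, and the exempted services and Regions shown are placeholder assumptions) shows the general shape: deny everything except a list of global-service actions whenever the requested Region falls outside the allowed set.

```python
# Illustrative only: the general shape of a Region deny SCP.
# Control Tower manages the real policy, including its full list of
# exempt global services; the entries below are placeholders.
region_deny_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            # Global services stay reachable regardless of Region.
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    # Placeholder allowed Regions.
                    "aws:RequestedRegion": ["us-east-1", "us-west-2"]
                }
            },
        }
    ],
}
```

View the full article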
-
Tagged with: control tower, s3 (and 4 more)
-
We are excited to announce general availability of the automated chatbot designer in Amazon Lex, enabling developers to automatically design chatbots from conversation transcripts in hours rather than weeks. Introduced at re:Invent in December 2021, the automated chatbot designer enhances the usability of Amazon Lex by automating conversational design, minimizing developer effort and reducing the time it takes to design a chatbot.
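As a hedged sketch of how you might kick off the designer programmatically (the bot ID, locale, and bucket name are placeholders, not values from this announcement), the Lex V2 API exposes a bot recommendation call that points at transcripts stored in S3:

```python
import boto3

lex = boto3.client("lexv2-models")

# Ask the automated chatbot designer to analyze existing conversation
# transcripts and recommend intents and slot types.
lex.start_bot_recommendation(
    botId="BOT1234567",   # placeholder bot ID
    botVersion="DRAFT",
    localeId="en_US",
    transcriptSourceSetting={
        "s3BucketTranscriptSource": {
            "s3BucketName": "my-transcripts-bucket",  # placeholder bucket
            "transcriptFormat": "Lex",
        }
    },
)
```

View the full article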
-
You can now configure your Amazon Lex chatbot to improve engagement with customers who speak French, Spanish, Italian, or Canadian French. Amazon Lex allows you to create intelligent conversational chatbots that can be used with Amazon Connect to automate high-volume interactions without compromising customer experience. Customers can perform tasks such as changing a password, requesting a balance on an account, or scheduling an appointment using natural conversational language. Customers can say things like “I need help with my device” instead of having to listen through and remember a list of options like press 1 for sales, or press 2 for appointments.
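Concretely, language support in Lex V2 is configured per locale. A hedged sketch of adding one of the newly supported locales to an existing bot with boto3 (the bot ID and confidence threshold are placeholders) might look like this:

```python
import boto3

lex = boto3.client("lexv2-models")

# Add a French locale to an existing draft bot; the other newly
# supported locales would be es_ES, it_IT, and fr_CA.
lex.create_bot_locale(
    botId="BOT1234567",                 # placeholder bot ID
    botVersion="DRAFT",
    localeId="fr_FR",
    nluIntentConfidenceThreshold=0.40,  # placeholder threshold
)
```

View the full article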
-
Forum Statistics
67.6k Total Topics
65.5k Total Posts