Visualistan: Big Tech




Google is testing a new AI bot that holds conversations with people, using their speech as input. The model was developed to help people practice a new language.

 

Google rolled out this conversational AI bot back in October of last year, but it was underdeveloped at the time, capable only of providing feedback on spoken sentences. In the latest version, the bot can help users practice ongoing conversations in the language they are learning, although it currently supports conversations only in English.

Google Brings Out its Conversational AI Bot in Search Labs



As its latest major move in the world of AI, tech giant Meta has added its AI chatbot to all of its platforms: Facebook, Instagram, WhatsApp, and Messenger (minus Threads). The chatbot is available in each of these apps through a dedicated search bar.

 

Meta’s AI chatbot is powered by the company’s Llama 3 model – its most powerful AI model yet, which Meta claims even beats ChatGPT – and which the company bills as “the most intelligent AI assistant” that can be used for free. If those claims hold, Meta’s integrated AI chatbot should rank among the best AI assistants out there too.

 

Meta’s Advanced AI Assistant Makes its Way to Facebook, Instagram, WhatsApp, and Messenger



Microsoft is releasing Copilot for OneDrive in April and has recently revealed in a new blog post how the AI integration will work on its file-hosting platform. From finding information to summarizing and extracting it from an extensive range of files, Copilot will essentially function as your research assistant in OneDrive.

 

The files that Copilot can work with include text documents, presentations, spreadsheets, HTML pages, PDFs, and more. In addition to generating summaries, the bot can tailor them to the user’s instructions, such as including only key points or highlights from a selected section.

Copilot for OneDrive can Extract and Summarize Data from a Wide Range of Files



Samsung has expanded support for its range of audio technologies: Auracast, 360 Audio, and Auto Switch. While these technologies have already been accessible on Samsung devices, a new blog post explains the extent to which they will be supported across the lineup. According to Samsung, the updates will begin rolling out to the Galaxy Buds 2 and Buds FE at the end of February, followed by the Buds 2 Pro.

 

Auracast, a fairly new industry-wide Bluetooth technology, allows a device to broadcast an audio stream to an unlimited number of endpoints, such as speakers and headphones. Auracast first arrived on Samsung’s Galaxy Buds 2 Pro earbuds and its latest high-end TVs last year, followed by the Galaxy S24 series in January.

Samsung Announces an Expanded List of Devices that Support its Different Audio Technologies



A report from Bloomberg states that Apple has advanced internal testing of new generative AI integrations for its Xcode programming software and will be making them available to third-party developers this year.

 

Additionally, Apple is reported to be exploring generative AI in consumer-facing products, such as automatic playlists in Apple Music, slideshows in Keynote, and AI chatbot and search features for Spotlight.

 

Apple’s revamped AI-powered code completion tool is similar to Microsoft’s GitHub Copilot, according to Bloomberg’s report. It applies a large language model (LLM) to predict and complete code strings, as well as generate code to test apps.
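Bloomberg gives no implementation details, but the general technique such tools rely on is straightforward: send the code surrounding the cursor to an LLM and splice its predicted continuation back in. Below is a minimal sketch of that idea – not Apple’s tool – using the OpenAI Python client as a stand-in model; the prompt wording and the sample snippet are illustrative assumptions.

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A partial code string, as an editor might capture it at the cursor.
prefix = (
    "def median(values: list[float]) -> float:\n"
    "    ordered = sorted(values)\n"
)

# Ask the model to predict the rest of the function (hypothetical prompt).
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Complete the Python code you are given. "
                       "Return only the code that should follow it.",
        },
        {"role": "user", "content": prefix},
    ],
)

# Splice the completion back in after the user's partial code.
print(prefix + response.choices[0].message.content)
```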

Apple is Reportedly Furthering Development of its AI-powered Code Completion Tool



Apple’s research department is pitching a prototype of a new generative AI animation tool, ‘Keyframer,’ which adds motion to 2D images using text prompts.

 

Apple is keen to explore the potential of large language models (LLMs) in animation, just as it has in text and image generation. Among its latest generative AI projects, Apple earlier introduced Human Gaussian Splats (HUGS), which creates animation-ready human avatars from video clips, and MGIE, which edits images using text prompts.

 

In a research paper the company published last week, it explains that Keyframer is powered by OpenAI’s GPT-4 model and takes input in the form of Scalable Vector Graphics (SVG) files. It then produces CSS code that animates the image based on a text prompt. These prompts can be any description of how the animation should look, e.g. “make the frog jump.”
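The paper’s pipeline – SVG in, text prompt in, CSS animation out – is easy to picture in code. The sketch below is a rough approximation of that flow, not Apple’s implementation: it assumes the official OpenAI Python client, and the file name frog.svg, the prompt, and the system instructions are all illustrative.

```python
from pathlib import Path

from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative input: any static SVG whose elements have ids works.
svg_markup = Path("frog.svg").read_text()
prompt = "make the frog jump"  # example prompt from the paper

# Ask the model for CSS @keyframes rules targeting the SVG's element ids.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Given an SVG and an animation description, return "
                       "only CSS (with @keyframes) that animates the SVG's "
                       "elements by their ids.",
        },
        {"role": "user", "content": f"{svg_markup}\n\nAnimation: {prompt}"},
    ],
)

css = response.choices[0].message.content
# Wrapping this CSS in a <style> tag inside the SVG yields the animation.
print(css)
```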

 

Apple Introduces New Generative AI Innovation ‘Keyframer’ that Animates Images



Google is merging its AI products Bard and Duet into one product called Gemini. There is now a Gemini app for Android where the Bard chatbot and all the Duet AI features in Google Workspace are available. In addition, Gemini Ultra 1.0, the largest and most capable version of Google’s large language model, is being released to the public.

 

Downloading the Gemini app will set Gemini (previously Bard) as your default assistant, replacing Google Assistant when you say “Hey Google” or long-press the home button. According to Sissie Hsiao, who runs Gemini at Google, the AI assistant is “conversational, multimodal, and more helpful than ever before.” For the most part, though, the changes to Bard amount to a rebranding, so both the chatbot and the AI features for Workspace will feel much the same as they did before.


Gemini will work as both an AI assistant and a chatbot, and could even be used in place of Search, as Google has added a toggle at the top of the app that lets you switch between Search and Gemini. This shows just how much faith Google has put into Gemini: the company appears to consider it as important as Search, long its most important product.

Google Rebrands its Bard Assistant and Duet Features to a Single AI Product: ‘Gemini’



Microsoft’s AI ventures, which started with Bing, are only getting bigger with time. From AI in Office apps to a dedicated AI key on laptops, Microsoft has integrated the technology into just about everything it owns.

 

In the AI domain, Microsoft has now shifted its focus away from Bing after the platform failed to find the success the company had anticipated. In its place, Microsoft has for some time been giving the limelight to Copilot, the AI assistant that is now part of almost every key Microsoft software product and service.

 


In one of its latest efforts to make Copilot stand out, Microsoft has launched a new Super Bowl commercial for Copilot that is set to air on Sunday. The commercial centers on what makes Copilot special, AI’s creative solutions, and the stories of gamers with disabilities.

Microsoft Launches Super Bowl Ad for Copilot and Several New AI Features



The latest feature in Google’s Bard chatbot lets users receive responses to their questions in real time, meaning you can see the answers as they are being generated. Previously, the answers could be viewed only once they were complete.

 

The real-time response generation option can also be turned off: users can choose between the ‘respond in real time’ and ‘respond when complete’ options via the icon in the top-right corner of Bard’s window.

 

In addition, users will be able to cut Bard off while it is generating a response using a ‘skip response’ option. This is useful when a user wants to type another question without waiting for the whole answer.

Google Brings New Features to Bard, Including Real-Time Generation of Responses



Earlier this year, Google announced a new AI project called Project Tailwind, described as an AI notebook where people could interact with their notes and train the model on their document entries. The project was later renamed ‘NotebookLM’ and is currently available as a prototype.

 

While NotebookLM works as a standalone app for now, it will likely later be added as a Google Docs or Drive feature, where it can read and assess users’ files.

 

To use NotebookLM, you first need to create a new project. The app then prompts you to add sources, from which it generates a ‘Source Guide’ – a summary of the entire document, along with some key topics and relevant questions to ask. Since the app is still in its early stages of development, it accepts only up to five sources, with each source no more than 10,000 words long; beyond those limits, the app stops responding.
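A minimal sketch, assuming only the limits reported above, of how a client might pre-check text sources before uploading them; the function name and error messages are illustrative, not part of NotebookLM.

```python
MAX_SOURCES = 5            # prototype limit reported above
MAX_WORDS_PER_SOURCE = 10_000

def check_notebooklm_sources(sources: list[str]) -> None:
    """Raise if a set of text sources exceeds the reported prototype limits."""
    if len(sources) > MAX_SOURCES:
        raise ValueError(f"at most {MAX_SOURCES} sources are accepted")
    for i, text in enumerate(sources):
        words = len(text.split())
        if words > MAX_WORDS_PER_SOURCE:
            raise ValueError(
                f"source {i} is {words} words; limit is {MAX_WORDS_PER_SOURCE}"
            )
```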




Get to Know Google’s New AI Research Tool That You Can Train on Your Personal Docs



A new report suggests that Meta’s next-gen Ray-Ban Stories glasses will be updated with features specifically designed for livestream creators. Streamers will be able to use the camera-equipped glasses to stream video directly to Facebook and/or Instagram. The glasses will also enable them to communicate with viewers during a livestream.

 

Relaying viewer comments in an automated voice through the built-in headphones would be an impressive feature and a big attraction for streaming stars. More and more creators are becoming interested in streaming video on social media platforms, which only increases the likelihood of Meta’s success with the device.

Meta is Reportedly Working on Adding Livestreaming Functions to the Next-Gen Ray-Ban Stories Glasses



To commemorate the tenth anniversary of the Apple Watch’s debut, Apple is reportedly working on a major redesign for the upcoming Apple Watch X. The device, however, is not expected to come out this year, at least.

 

As for the Apple Watch Series 9, arriving this September alongside the new iPhone, it will bring only a minor upgrade, possibly a faster processor. With every annual launch of a new Apple Watch, Apple has strictly stuck to minor upgrades, the one exception being the launch of the Apple Watch Ultra.

Apple is Reportedly Preparing a Major Redesign for the Apple Watch
