Visualistan: Big Tech




Google is testing a new AI bot that holds conversations with people using their speech as input. The model was developed to help people practice a new language.

 

Google rolled out this conversational AI bot back in October of last year, but it was underdeveloped at the time, as its capability was limited to providing feedback on spoken sentences. In the latest version, the bot can now help users practice having ongoing conversations in the language they are learning, although it currently only supports conversations in English.

Google Brings Out its Conversational AI Bot in Search Labs



As its latest major move in the world of AI, tech giant Meta has added its AI chatbot to all of its platforms: Facebook, Instagram, WhatsApp, and Messenger (but not Threads). The chatbot is accessible on each of these apps through a dedicated search bar.

 

Meta’s AI chatbot is powered by the company’s Llama 3 model, Meta’s most powerful AI model yet, which the company claims makes it “the most intelligent AI assistant” that can be used for free. If that claim holds, Meta’s integrated chatbot would rank among the most capable free AI assistants available.
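Meta AI itself is a hosted assistant rather than a public API, but the Llama 3 instruct models it is built on are openly downloadable (after access approval) on Hugging Face. The following is only a minimal, illustrative sketch of querying such a checkpoint with the transformers library; the model name, prompt, and generation settings are assumptions for demonstration, not Meta’s own setup.

```python
# Illustrative sketch: query a Llama 3 instruct checkpoint with Hugging Face transformers.
# The checkpoint is gated; access must be requested on Hugging Face first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint for demonstration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3 instruct models use a chat template; apply it to build the prompt.
messages = [{"role": "user", "content": "Suggest a short caption for a sunset photo."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply and decode only the newly produced tokens.
output = model.generate(input_ids, max_new_tokens=80)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```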

 

Meta’s Advanced AI Assistant Makes its Way to Facebook, Instagram, WhatsApp, and Messenger



Microsoft is releasing Copilot for OneDrive in April and has recently revealed in a new blog post how the AI integration will work on its file hosting platform. From finding information to summarizing and extracting it from a wide range of files, Copilot will essentially function as a research assistant inside OneDrive.

 

The files that Copilot can work with include text documents, presentations, spreadsheets, HTML pages, PDFs, and more. In addition to generating summaries, the bot can tailor them to the user’s instructions, for example including only key points or highlights from a selected section.

Copilot for OneDrive can Extract and Summarize Data from a Wide Range of Files



Samsung has expanded support for its range of audio technologies: Auracast, 360 Audio, and Auto Switch. While these technologies have already been available on some Samsung devices, a new blog post explains how widely they will be supported across the lineup. According to Samsung, the updates will begin rolling out to the Galaxy Buds 2 and Buds FE at the end of February, followed by the Buds 2 Pro.

 

Auracast, an industry-wide and fairly new Bluetooth technology, allows a device to broadcast an audio stream to an unlimited number of endpoints, such as speakers and headphones. Auracast first launched on Samsung’s Galaxy Buds 2 Pro earbuds and its latest high-end TVs last year, followed by the Galaxy S24 series in January.

Samsung Announces an Expanded List of Devices that Support its Different Audio Technologies



A report from Bloomberg states that Apple has advanced the internal testing of new generative AI integrations for its Xcode programming software, and will be making them available to third-party developers this year. 

 

Additionally, Apple is reported to be exploring generative AI in consumer-facing products, such as automatic playlists in Apple Music, slideshows in Keynote, and AI chatbot and search features for Spotlight.

 

Apple’s revamped AI-powered code completion tool is similar to Microsoft’s GitHub Copilot, according to Bloomberg’s report. It applies a large language model (LLM) to predict and complete code strings, as well as to generate code for testing apps.
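Apple’s Xcode tool is internal and unreleased, so its interface is unknown; the snippet below is only a hedged sketch of the general pattern described here (an LLM completes a partial code string), using OpenAI’s public chat completions API as a stand-in. The model choice, prompt, and unfinished Swift snippet are assumptions for illustration.

```python
# Hedged sketch of LLM-based code completion, not Apple's tool: send a partial code
# string to a hosted chat model and ask it to finish the code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

partial_code = "func isPalindrome(_ s: String) -> Bool {\n    "  # hypothetical unfinished Swift function
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Complete the given code. Return only code, no commentary."},
        {"role": "user", "content": partial_code},
    ],
)
print(partial_code + completion.choices[0].message.content)
```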

Apple is Reportedly Furthering Development of its AI-powered Code Completion Tool



Apple’s research department has presented a prototype of a new generative AI animation tool, ‘Keyframer,’ which adds motion to 2D images based on text prompts.

 

Apple is keen to explore the potential of large language models (LLMs) in animation, just as it has in text and image generation. Among its recent generative AI projects, Apple earlier introduced Human Gaussian Splats (HUGS), which creates animation-ready human avatars from video clips, and MGIE, which edits images using text prompts.

 

In a research paper published last week, the company explains that Keyframer is powered by OpenAI’s GPT-4 model and takes its input in the form of Scalable Vector Graphics (SVG) files. It then produces CSS code that animates the image according to a text prompt. These prompts can be anything that describes how the animation should look, e.g. “make the frog jump.”
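Keyframer is a research prototype rather than a public product, so the snippet below is only a rough sketch of the pipeline as described: an SVG file and a natural-language prompt go to GPT-4, and CSS keyframe rules come back. It calls OpenAI’s public Python SDK directly; the file name, system prompt, and output handling are assumptions, not Apple’s implementation.

```python
# Rough sketch of the described SVG-in, CSS-out pattern using OpenAI's API directly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

svg = open("frog.svg").read()  # hypothetical 2D illustration to animate
prompt = "make the frog jump"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Return only CSS keyframe rules that animate elements of the given SVG by their ids."},
        {"role": "user", "content": f"{svg}\n\nAnimation request: {prompt}"},
    ],
)
print(response.choices[0].message.content)  # CSS that can be attached to the SVG
```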

 

Apple Introduces New Generative AI Innovation ‘Keyframer’ that Animates Images



Google is merging its AI products Bard and Duet into one product called Gemini. There is now a Gemini app for Android where the Bard chatbot and all Duet AI features in Google Workspace are available. In addition, Google’s largest and most capable version of its large language model, Gemini Ultra 1.0, is being released to the public.

 

Downloading the Gemini app will set Gemini (previously Bard) as your default assistant, replacing Google Assistant when you say “Hey Google” or long-press the home button. According to Sissie Hsiao, who runs Gemini at Google, the AI assistant is “conversational, multimodal, and more helpful than ever before.” For the most part, though, the changes to Bard amount to a rebranding, so both the chatbot and the AI features for Workspace will feel much the same as they did before.


Gemini will work both as an AI assistant and a chatbot, and it could even be used in place of Search, as Google has added a toggle at the top of the app that lets you switch between Search and Gemini. This shows how much faith Google has placed in Gemini: the company appears to consider it as important as Search, which has long been its most important product.

Google Rebrands its Bard Assistant and Duet Features to a Single AI Product: ‘Gemini’



Microsoft’s AI ventures, which started with Bing, are only getting bigger and better with time. From AI in Office apps to a dedicated AI key for laptops, Microsoft has integrated the technology into just about everything it owns.

 

In the AI domain, Microsoft has now shifted its focus away from Bing after the platform failed to achieve the success the company had anticipated. In its place, Microsoft has for some time been putting the spotlight on Copilot, the AI assistant that is now part of almost every key Microsoft software product and service.

 


In one of its latest efforts to make Copilot stand out, Microsoft has launched a new Super Bowl commercial for Copilot that is set to air on Sunday. The commercial centers on what makes Copilot special, AI’s creative problem-solving, and the stories of gamers with disabilities.

Microsoft Launches Super Bowl Ad for Copilot and Several New AI Features



A new feature in Google’s Bard chatbot allows users to receive responses to their questions in real time, meaning you can see answers as they are being generated. Previously, answers could only be viewed once they were complete.

 

The real-time response generation option can also be turned off. Users can choose between the ‘respond in real time’ and ‘respond when complete’ options from the icon in the top-right corner of Bard’s window.

 

In addition, users can cut Bard off while it is generating a response using a ‘skip response’ option. This is useful when a user wants to type another question without waiting for the whole answer.
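Bard’s toggle is a UI setting rather than something programmable, but the same streaming-versus-complete distinction exists in Google’s generative AI Python SDK, which can serve as a rough analogy. In the sketch below, the model name, API key handling, and prompt are assumptions for illustration.

```python
# Rough analogy to Bard's "respond in real time" vs "respond when complete" options,
# using Google's generative AI Python SDK (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key
model = genai.GenerativeModel("gemini-pro")

# "Respond in real time": print chunks as they are generated.
for chunk in model.generate_content("Explain contrails in two sentences.", stream=True):
    print(chunk.text, end="", flush=True)

# "Respond when complete": wait for the full answer before showing it.
print(model.generate_content("Explain contrails in two sentences.").text)
```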

Google Brings New Features to Bard, Including Real-Time Generation of Responses



Earlier this year, Google announced a new AI project called Project Tailwind, which it described as an AI notebook where people could interact with their notes and train the model based on their document entries. The project was later renamed to ‘NotebookLM’ and is currently available as a prototype.

 

While NotebookLM works as a standalone app for now, it could later be added as a Google Docs or Drive feature, where it could read and assess users’ files.

 

To use NotebookLM, you first create a new project. The app then prompts you to add sources, from which it generates a ‘Source Guide’: a summary of the entire document, along with some key topics and relevant questions to ask. Since the app is still in its early stages of development, it accepts only up to five sources, each no more than 10,000 words long; otherwise the app stops responding.




Get to Know Google’s New AI Research Tool that you Can Train on your Personal Docs



A new report suggests that Meta’s Next-Gen Ray Ban Stories Glasses will be updated with features specifically designed for live stream creators. Streamers will be able to use these camera-equipped glasses to stream video directly to Facebook and/or Instagram. The glasses will also enable them to communicate with viewers during the livestream.

 

Relaying comments in an automated voice over the glasses’ built-in speakers would be an impressive feature and a big attraction for streaming stars. More and more creators are becoming interested in streaming video on social media platforms, which only increases the likelihood of Meta’s success with the device.

Meta is Reportedly Working on Adding Livestreaming Functions to the Next-Gen Ray Ban Stories Glasses



To mark the 10th anniversary of the Apple Watch’s debut, Apple is reportedly working on a major redesign for an upcoming Apple Watch X. The device, however, is not expected to arrive this year.

 

As for the Apple Watch Series 9, arriving this September alongside the new iPhone, it will bring a minor upgrade, possibly a faster processor. With every annual launch of a new Apple Watch, Apple has stuck to introducing only minor upgrades, except when it launched the Apple Watch Ultra.

Apple is Reportedly Preparing a Major Redesign for the Apple Watch



In collaboration with American Airlines and Breakthrough Energy, Bill Gates’ climate investment fund, Google has developed contrail forecast maps that can help plot more sustainable flight routes.

 

Contrails are the white condensation streaks that planes sometimes leave behind in the sky, and they account for about 35% of aviation’s global warming impact. By avoiding routes that create contrails, pilots can reduce their flights’ environmental footprint.



 

Google Develops and Tests Contrail Forecast Maps to Reduce Aviation Global Warming Impact



Ahead of the 2024 product lineup release, Apple is reportedly testing a new computer: a Mac Mini powered by the company’s latest M3 chip. The report comes from Mark Gurman at Bloomberg, who specifies that Apple may be testing a Mac with an eight-core CPU, a 10-core GPU, and 24GB of RAM, running macOS Sonoma 14.1.

 

Gurman further notes that M3-powered iMacs are not likely to arrive before the first fiscal quarter of 2024, which starts in October. Additionally, the 13-inch MacBooks, namely the MacBook Air and MacBook Pro, are expected to receive M3-equipped updates in the same time frame.

Apple Reported to be Testing a New M3-Powered Mac Mini



Not planning to leave audio behind in the world of AI, Meta has launched a new AI project, ‘AudioCraft,’ designed to let people create “high quality audio and music” from text prompts. Does this mean anyone can now produce music without playing notes, learning an instrument, or having any specific skills? It seems so.

 

AudioCraft is built on open-source generative AI research models, including MusicGen, AudioGen, and EnCodec. The characteristics of the three models are listed below, followed by a short usage sketch:

MusicGen is capable of generating music from text prompts. It has been trained on Meta-owned and licensed music.

AudioGen has been trained on public sound effects including environmental sounds and sounds of nature and animals. Like MusicGen, it too generates audio from text prompts.

EnCodec is a decoder, of which Meta has just released a revamped version. The updated model generates higher-quality music with fewer artifacts.
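Because AudioCraft is released as an open-source library, the text-to-music flow can be tried directly. Below is a minimal usage sketch based on the library’s documented MusicGen interface; the checkpoint size, prompt, and output settings are example choices, not recommendations.

```python
# Minimal MusicGen sketch using the open-source audiocraft library (pip install audiocraft).
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # small checkpoint as an example
model.set_generation_params(duration=8)  # seconds of audio to generate

# Text prompt in, waveform tensor out.
wav = model.generate(["lo-fi hip hop beat with mellow piano"])
audio_write("musicgen_sample", wav[0].cpu(), model.sample_rate, strategy="loudness")
```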

Meta Launches New Music and Audio Creation AI Project ‘AudioCraft’

 

A variant of the language model behind Google’s AI chatbot Bard was designed and launched to answer health and medical questions. An improved version of the model, named Med-PaLM 2, is currently being tested at hospitals, including the Mayo Clinic research hospital.

 

Med-PaLM became the first AI system to exceed the US medical licensing exam’s pass mark of 60%. While Med-PaLM’s answers have not been as comprehensive as those of clinicians, the technology has been found to maintain accuracy and safety in its responses. “We are still learning,” Google said of the system’s performance.

Google’s Healthcare Chatbot ‘Med-PaLM 2’ is in the Process of Being Tested at Hospitals



Bloomberg’s Mark Gurman has reported that some of Apple’s AirPods may offer a new ‘hearing health’ feature with iOS 17. Additionally, he says that all of Apple’s upcoming headphones will move to USB-C.

 

“Apple is preparing to give the earbuds a fresh boost. It’s exploring major new hearing health and body-temperature features, and is planning cheaper models and a transition to USB-C charging ports,” Gurman writes in a recent Bloomberg newsletter.

Apple is Reportedly Working on a New Hearing Health Functionality in AirPods



During the announcement of its new Pixel Tablet, Google revealed another major development: the tech giant’s own weather forecast platform. This is big news for Pixel users, considering that until now they could only get weather updates from a simple widget on Pixel phones, displayed in a basic format.

 

Pixel Weather doesn’t exist on the Pixel Tablet as a separate app but is integrated into the main Google app. Designed with richer visuals and information, it looks more advanced and is not only optimized for small screens but makes good use of the big screen as well. A dark mode can also be activated, which tones down and inverts most colours without affecting the graphics.

 

In addition to displaying 10-day and 24-hour forecasts, Pixel Weather shows stats for wind, humidity, barometric pressure, UV index, current sun position, highs and lows, ‘feels like’ temperatures, and sunrise and sunset times. Tapping a day in the 10-day forecast also reveals hourly details for precipitation, wind, and humidity. In other words, Pixel Weather is just as capable as the Apple and Samsung weather apps.

Google Launches ‘Pixel Weather’ within its App, Providing Users with Detailed Weather Updates and Forecasts


Google announced at this year’s ISTELive edtech conference that it has partnered with Adobe to provide a licensed Adobe Express program across Chromebooks. The aim of the launch is to give schools easy access to Adobe Express so they can distribute the platform to students, encouraging the creation of more immersive and innovative projects and experiments.


Google Launches Direct Access to Adobe Express across Students’ Chromebooks

Google has just announced software updates for its Pixel phones that also extend to the Pixel Watch and Fitbit devices. The latest feature drop includes camera enhancements, new personal safety options, and updates to haptics and adaptive charging.

 

The most significant camera enhancement is the new ‘macro mode,’ which enables detailed close-up shots of small subjects. This, however, is exclusive to the Pixel 7 Pro and not available in all camera apps or modes.

 

Moreover, phones from the Pixel 6 onwards will gain a new hands-free gesture: showing your palm to the viewfinder starts a photo timer. Upon detecting the gesture, the camera counts down from 3 or 10 seconds, depending on the setting, and then snaps a shot.

 

Another enhancement is the ability to create ‘cinematic wallpapers’ that add more depth to your lock screen, along with a slick parallax effect. Cinematic photos are not a new Google technology, having been a Google Photos feature for quite some time. Until now, however, users had no control over which photos received the effect; they can now apply the cinematic effect to any photo of their choice.

Google’s New Software Updates for Pixel Phones Include Camera Enhancements, New Safety Controls and More
