Google Claims Its AI Can Guess How a Shirt Will Fit on Every Body Type
While some of us are gearing up for a Hot Girl Summer, Google is instead taking the shirt off its AI features and letting them bask in the sun. One of the latest ways Google is trying to fold generative AI into its existing products is by helping users guess how a shirt for sale online might look in their size, replicating it onto dozens of models’ bodies.
On Wednesday, Google announced a “virtual try-on” feature for its Shopping tab. Essentially, Google takes clothing brands’ items and uses diffusion-based image generation to superimpose a version of each garment onto a range of 40 different body types, spanning sizes XXS to 4XL, though Google told Gizmodo the feature is currently limited to women’s tops. The new feature is available to all users starting today.
When browsing certain clothing items, users will see a “Try on models” button on the first image. Google will then accordion out a selection of sizes from XS through XL. The company was adamant this feature would help shoppers who complain they can’t tell how a shirt will look on them when buying online, and the AI will supposedly account for how clothes tend to drape, fold, stretch, or wrinkle.
The virtual try-on’s diffusion model is based on Google’s own Imagen, though here the AI sends the image of the model and the image of the clothing article to two separate neural networks that share information with each other to create the final, hopefully realistic-looking, product. The model is trained on Google’s Shopping Graph, a data set containing more than 35 billion product listings.
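Google hasn’t released the production code, but its description (two networks trading information while generating the image) maps loosely onto cross-attention between the two branches’ feature maps. Below is a minimal PyTorch sketch of just that information-sharing step; the module names, dimensions, and structure are illustrative assumptions, not Google’s implementation:

```python
# Hypothetical sketch of two networks exchanging features via
# cross-attention, loosely modeled on Google's description of its
# try-on diffusion model. All names and sizes are invented.
import torch
import torch.nn as nn

class CrossTalkBlock(nn.Module):
    """Lets person features attend to garment features, and vice versa."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.person_attends = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.garment_attends = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, person: torch.Tensor, garment: torch.Tensor):
        # person, garment: (batch, tokens, dim) flattened feature maps
        p, _ = self.person_attends(person, garment, garment)  # person queries garment
        g, _ = self.garment_attends(garment, person, person)  # garment queries person
        return person + p, garment + g                        # residual updates

# Toy usage: 8x8 feature maps flattened to 64 tokens per branch.
person_feats = torch.randn(1, 64, 256)   # from the model-photo branch
garment_feats = torch.randn(1, 64, 256)  # from the clothing-photo branch
person_feats, garment_feats = CrossTalkBlock()(person_feats, garment_feats)
```

The appeal of this kind of exchange is that the garment branch can “see” the target body’s pose and shape while the person branch can “see” the garment’s texture, which is what you’d want for plausible draping and wrinkling.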
Some of the models shown in Google’s demos have the shirt tucked into their pants while others leave it loose, and there’s no option to set how the AI depicts each model wearing the outfit. Not to mention, the AI isn’t actually predicting a garment’s quality, just how it would fit different body types if it were made well in those sizes.
The feature is also relatively limited to start. Only a few brands are currently represented, including H&M, Everlane, LOFT, and Anthropologie, though the company said it plans to add more clothing brands over time. Eventually, the feature should work for both men’s and women’s clothing, meaning a total of 80 different images of real-life models able to wear the AI-generated clothing.
Google Search with AI Beta Will Take User Reviews to Praise or Punish Travel Destinations

Google’s Search Generative Experience (SGE), announced earlier this year at its I/O event, is still very limited in who gets to use it, but it’s going to be a lot more willing to offer critiques of restaurants, shops, and destinations starting this summer. Google said the SGE will start taking user reviews into account to offer commentary on local destinations based simply on users’ prompts.
Currently, when users type in “Is the [X] restaurant a good place to eat?” the SGE won’t always offer a suggestion automatically. When a user requests that the Search AI provide a response, the answer is based on local blogs and other review sites (a common cause of consternation for websites, especially those that depend on traffic from Google).
Now, if a user asks the SGE whether a certain restaurant is good for large groups, or what the benefits of visiting The Bean in Chicago are, Search will draw on the reviews, photos, and business profile information that users have submitted to Google. Those reviews should appear under the AI-generated text for Search Labs beta users starting Wednesday.
The company told Gizmodo that the system will prioritize reviews with the most upvotes, but of course that doesn’t necessarily mean the review is even accurate about the restaurant. In the example Google showcased based on a local Indian restaurant in Manhattan, the AI praised the restaurant in one paragraph but also added that “some users” found the food was too expensive for what you get (a criticism which, if you’ve been around Manhattan, could be applied to most restaurants).
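Google hasn’t said how the ranking works beyond “most upvotes,” but the gist (sort reviews by helpfulness votes, then hand the top few to the language model as context) is easy to sketch. Everything below, from the review structure to the prompt wording, is an assumption for illustration:

```python
# Toy sketch of upvote-based review selection for an AI summary.
# The Review shape and prompt format are assumptions, not Google's
# actual pipeline.
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    upvotes: int

def build_summary_context(reviews: list[Review], top_k: int = 5) -> str:
    """Pick the most-upvoted reviews and format them as model context."""
    top = sorted(reviews, key=lambda r: r.upvotes, reverse=True)[:top_k]
    lines = [f"- ({r.upvotes} upvotes) {r.text}" for r in top]
    return "Summarize this place using these reviews:\n" + "\n".join(lines)

reviews = [
    Review("Great for big groups, book ahead.", 42),
    Review("Tasty but pricey for the portions.", 31),
    Review("Slow service on weekends.", 5),
]
print(build_summary_context(reviews, top_k=2))
```

Note that sorting purely by upvotes bakes in exactly the caveat above: popularity measures perceived helpfulness, not accuracy.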
Google maintains that this feature is still experimental. User reviews aren’t necessarily the best gauge of real quality, and restaurants or other shops could be rightfully miffed if Google itself is serving users complaints about their establishments. It’s still unclear whether Google’s AI will always offer both positives and negatives about an experience, or whether enough angry user reviews for an establishment would tilt the AI’s response toward that negativity.
Lens Will Soon Gain Access to Bard and Can Try to Identify Skin Conditions

As announced at Google I/O in May, Google plans to integrate Google Lens into Bard to let users generate prompts based on images. Effectively, this allows for a kind of multisearch functionality, but for the company’s still-somewhat-wonky chatbot.
In a demo, Google showed how Bard could identify a pair of sandals and then come up with similar products people could wear, or even offer more ideas for how to style those sandals. It’s similar to how Lens currently works in Google Search, and Bard will include a new button that takes users directly to Google Search to browse more of what Bard suggested.
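Bard’s internals aren’t public, but “spot an item in a photo, then suggest similar products” is classically built on shared image embeddings plus a nearest-neighbor lookup. A rough sketch of that pattern, where embed_image() is a stand-in for a real vision encoder and the catalog is random data:

```python
# Rough sketch of similarity-based product suggestions: embed the
# query image, then rank catalog items by cosine similarity.
# embed_image() stands in for any image encoder (e.g. a CLIP-style
# model); the catalog here is random data for illustration.
import numpy as np

rng = np.random.default_rng(0)

def embed_image(image) -> np.ndarray:
    """Stand-in encoder: a real system would run a vision model here."""
    return rng.normal(size=512)

catalog = {f"sandal_{i}": rng.normal(size=512) for i in range(100)}

def suggest_similar(query_vec: np.ndarray, k: int = 3) -> list[str]:
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(catalog, key=lambda name: cosine(query_vec, catalog[name]),
                    reverse=True)
    return ranked[:k]

query = embed_image("photo_of_sandals.jpg")
print(suggest_similar(query))
```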
Google said that feature set should be rolling out in the next few weeks. Bard is still restricted to the U.S., and just yesterday the company said it would delay its release in the EU, citing the European body’s concerns over user privacy.
Lens is also getting a little more personal, especially for those who obsessively search for images to compare against their skin blemishes. Google’s image search functionality will now be able to look at people’s skin conditions and offer suggestions for what they could be.
Ignoring the awkwardness of looking too closely at somebody’s mole or rash, the program was able to identify a skin tag on a person’s body. We didn’t get to fully vet all the websites Google pulled from for its citations, though for the hypochondriacs out there, it may be another way of obsessing over bumps and skin blemishes.
Google Maps on Desktop Will Help You Plan Complicated Trips

Google shared a few ways it plans to make road trips a bit easier this summer. Google Maps is getting a new “Immersive View for Routes” that performs a flythrough of the roads you’re expected to take, and even shows some of the expected traffic at the time of your trip.
In select cities, when users select a route, a small icon pops up in the bottom left corner of the mobile app showing the path in detail. Google said the new feature uses AI to simulate the traffic and weather at the time of departure and arrival, and it also links to live map data to show slowdowns or traffic accidents along the route. It could certainly be handy for folks who have never been to a location and would like some idea of what landmarks to spot when making a precarious left turn. Google says the feature will work for driving, biking, and walking routes.
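Google describes the flythrough’s traffic as AI-simulated for your departure time and then corrected with live data. The sketch below skips the AI part entirely and just shows the blending idea: a historical time-of-day estimate overridden by a live slowdown factor, with every number and structure invented for illustration:

```python
# Toy sketch of blending a historical time-of-day traffic estimate
# with live incident data, in the spirit of the feature described
# above. All values and data shapes are invented.
from datetime import datetime

# Hypothetical historical speeds (km/h) per road segment per hour.
HISTORICAL_SPEED = {"segment_a": {8: 25.0, 12: 45.0, 18: 20.0}}
LIVE_SLOWDOWN = {"segment_a": 0.5}  # live factor (1.0 = no slowdown)

def estimated_minutes(segment: str, length_km: float, depart: datetime) -> float:
    base = HISTORICAL_SPEED[segment].get(depart.hour, 40.0)  # fall back to free flow
    speed = base * LIVE_SLOWDOWN.get(segment, 1.0)           # apply live slowdown
    return 60.0 * length_km / speed

# A 2 km segment at 8 a.m., with rush-hour history and a live incident.
print(estimated_minutes("segment_a", 2.0, datetime(2023, 6, 14, 8)))
```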
The feature will be available in 15 cities to start, including New York City, and will likely be tied to the same cities featured in Google Maps’ Immersive View mode. The only other detail is that it should be available worldwide later this summer.
Maps is also getting a few smaller upgrades, including a reorganized “Recents” section on desktop to more easily organize and share destinations with road trip buddies.
The long-awaited Immersive View finally reared its head earlier this year. The number of cities with full 3D views was limited to start, but Google is now adding five more famous cities, including Venice, Florence, Dublin, and Amsterdam, to its slate. The company is also adding another 500 landmarks around the world for users to see as top-down scans. The view lets users see what the exterior of a location looks like, and also gives hints of expected weather and when tourists are likely to be swarming.
Source: Gizmodo.com