2015-06-19



Ups and downs of the new Google Photos

Unlimited and free photo storage. The prospect of image searching and automated organization, powered by Google’s search engine machine learning smarts. Two things you can’t get with competing services.

On the surface, Google Photos sounds not just good, but great. But what’s the catch? And does it stand up to the test of thousands of images? That’s what I set out to find over the past two weeks. And my results were, surprisingly, decidedly mixed. Google Photos does a lot really well, but it’s not even close to the slam-dunk it sounded like when we first heard about it at Google I/O 2015.

Why Google Photos

The ubiquity of good cell phone cameras and the rise of the selfie have led to an explosion in how many images all of us take. The stakes are high in this image gold rush: What’s the point of taking all of these pictures if we can’t find them? Finding those images again remains a challenge, given the drudgery of manual tagging. Whether you’re tagging baby pictures or the Olympics, the frustration is shared by casual users and pros alike. Just ask any professional who uses IPTC metadata to manually embed image tags and captions for the massive stock image databases, and you’ll get a universal look of chagrin over that necessary evil.
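For those who haven’t lived it, here’s roughly what that necessary evil looks like in practice – a minimal sketch of manual IPTC tagging, assuming the third-party iptcinfo3 Python package (the filename, keywords and caption here are all hypothetical):

```python
# Manual IPTC tagging, the chore stock shooters know well. This sketch
# assumes the third-party iptcinfo3 package; the filename, keywords and
# caption are all hypothetical examples, not anything from the article.
from iptcinfo3 import IPTCInfo

info = IPTCInfo('beam_routine.jpg')       # hypothetical file
info['keywords'] = ['gymnastics', 'balance beam', 'nationals']  # must be a list
info['caption/abstract'] = 'Senior all-around, day two.'
info.save()                               # writes the tags back into the JPEG
```

Multiply that by tens of thousands of frames and the appeal of automation becomes obvious.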

This is why Google, and others, are pursuing ways to make it easier for us to find, share and enjoy the images we take. And it’s why the idea of Google Photos – with the potential of all of my photos in one place, with an automated tagging system – seemed so appealing. (Appealing, that is, so long as you’re comfortable committing all of your memories to the Googleverse, but that’s a debate unto itself.)

Other services have offered bits here and there, but I still refrained from going all-in. I’m a shutterbug both personally and professionally. I go through a 2TB hard drive as often as some people change social media cover photos. For this reason (and others), services like Flickr, with its 1TB cap on free storage, never appealed to me.

Likewise, while Adobe Photoshop Lightroom and Camera Bits Photo Mechanic each have their merits for assisting with labeling, neither helps you find what you’re looking for across disparate mobile, desktop and cloud libraries.

This trifecta of challenges – storage, search and organization – is what Google set out to solve with its revamped Google Photos. And knowing these challenges first hand, I was eager to apply Google Photos’ image search and categorization technology to a random selection of my own images. With unlimited photo storage, I didn’t need to be selective or cherry-pick the files I sent to the cloud; nor would I reach a point where the service would warn me I was running out of space and needed to delete items to make room for new ones.

The immediate hook of the service, the one that resonated when it was introduced at I/O, was its unlimited, free storage for images up to 16 megapixels. If your image is larger than 16 megapixels, it will be downsized to meet that cap; likewise, even 16-megapixel images go through some optimization for storage on Google’s servers (which means a smaller file size when you export from the cloud).
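Google hasn’t detailed its actual resizing and recompression pipeline, so treat this as a rough local approximation of what a 16-megapixel cap implies, sketched with the Pillow library (the JPEG quality value and the filenames are my assumptions):

```python
# A rough local approximation of a 16-megapixel cap, using Pillow.
# Google hasn't published its real pipeline; the quality value here
# is my guess, not Google's.
from PIL import Image

MAX_PIXELS = 16_000_000

def cap_megapixels(path_in, path_out):
    img = Image.open(path_in)
    w, h = img.size
    if w * h > MAX_PIXELS:
        # Scale both dimensions by the same factor so the total area
        # lands near 16 million pixels while preserving aspect ratio.
        scale = (MAX_PIXELS / (w * h)) ** 0.5
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    img.save(path_out, quality=85)

cap_megapixels('dscn1234.jpg', 'dscn1234_capped.jpg')
```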

If you want to store RAW files, your original images, or images larger than 16 megapixels, you’ll need to opt for the paid version of the account, at $9.99 (about £6.29/AU$12.89) a month for 1TB of Google Drive storage. That 1TB is shared across Drive, Gmail and Photos. If you start with the free account and decide to upgrade, you’ll need to re-upload your images to get the benefit of full-resolution copies in storage.

Ready, set, upload

Getting my images uploaded – 31,041 at last count – was a surprisingly arduous affair, one that required multiple overnight sessions of my laptop feeding photos up to Google’s servers via a 12Mbps upstream Comcast cable modem. Two-thirds of those images came from my laptop’s SSD, but the final 10,000 came from other sources. Most were JPEGs, but some folders held both the JPEG and RAW files (in those cases, Google says both get uploaded, though I couldn’t tell from my image library), and still others had videos mixed in with the still images. Photos displays videos as well, at up to 1080p.
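Those overnight sessions square with simple arithmetic. Here’s the back-of-envelope version, with the caveat that the ~4MB average JPEG size is my assumption, not a measured figure:

```python
# Back-of-envelope upload math. The ~4MB average JPEG is an assumption;
# the 12Mbps upstream figure is the connection described above.
images = 31_041
avg_mb = 4                      # assumed average file size, in megabytes
upstream_mbps = 12              # upload speed, in megabits per second
hours = images * avg_mb * 8 / upstream_mbps / 3600
print(f"{hours:.0f} hours")     # roughly 23 hours of continuous uploading
```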



The images I chose very purposely represented a large mix of sources. I shot the pictures over the years using various point-and-shoot cameras and digital SLRs. Only the images captured on my cell phones had geotagging. And the folders represented a variety of workflows: Some images were meticulously filed into subfolders by event; some sat in generically labeled folders holding an entire 64GB card’s worth of pictures, which could span anywhere from a single hour to multiple months. In still other cases, I had best shots grouped together in folders where I might not remember the date or the event, but I knew why they belonged together.

For my tests, I used Samsung Galaxy Note 3 and Note 4 phones, a Google Nexus 9 tablet, and a Toshiba touchscreen Ultrabook. I started with the desktop app, and simultaneously began uploading phone images via the mobile app.

In demos, Google Photos looks streamlined and convenient. Pinch-and-zoom to browse the images by day, month and year. And scroll, scroll, scroll to find what you’re looking for.

While I like the design conceptually, in practice I found it less useful for navigating a large set of real-world images. The experience was far better on mobile than on desktop. On the phone or tablet, if I knew something happened in a given month and year, great: I’d pinch and zoom out to the year view, then scroll, scroll, scroll, and wait for the thumbnails to redraw on the device as they’re fed from the cloud.

Once I found the right year and month, I’d pinch-and-zoom into that month, then keep scrolling through the hundreds of images there to find the specific shot or series of shots I was looking for. Pinch-and-zoom further into the month and the images separate by day. When your library is packed with thousands of images shot on the same day, or cluster of days, this visual approach is not the most efficient way of finding images, but at least it gives you a starting point.

It’s worth noting that while the initial thumbnail draws can be painfully slow, this redraw process did get faster the more often I rapidly scrolled through eight years of images over a Wi-Fi connection. I was impressed with how quickly and smoothly images opened on the phone, and how I could zoom into an image without waiting for it to redraw.

On the desktop, I was much less impressed. You’ll access Google Photos via your web browser, at photos.google.com. Access is simple if you’re already logged into Gmail – just type the URL and your most recently uploaded photos appear, with a search bar at the top. No additional log-in required. That’s the good part.

The not-so-good: On the desktop, you lose that much-discussed ability to visually browse images with pinch-and-zoom. According to Google, launching without browser-based touchscreen support for pinch-and-zoom was a deliberate choice, though it’s possible we’ll see it at some undetermined point in the future.



I’d go so far as to say it’s not practical to visually browse for images on the desktop, not in the way you can by month, year or date on mobile. If a date or month is recent, and has a few hundred images, that’s somewhat manageable. But up the game to thousands of images on a given date two years ago, and the repetitive and tiresome scrolling makes this interface infinitely less efficient. Or pleasant, for that matter.

It helps if you know the specific date to browse by. This can be useful in some instances, but rare is the occasion you remember the precise date you took a shot (weddings, birthdays and anniversaries are perhaps the common exceptions). For example, I knew that I’d taken some pictures outside of Bangkok – images that weren’t being recognized as from Thailand – back in October 2014, but I didn’t remember the specific date. By searching on October 2014, I could scroll and eventually find the image.

Another circumstance this approach worked for: finding related images that weren’t being tagged or picked up in other searches. For example, I searched on Temple of Heaven to find a series of old slide images I’d scanned back in September 2014 using an external scanner.

The search successfully found the three scan attempts of that image – but not the other two images I remembered scanning at the same time. I searched on the date – oddly showing up as Jan 1, 2011, likely because of a scanner setting I hadn’t changed at the time – and found the remaining photos. For one of them, a closeup, it isn’t hard to understand why Google missed the proper tag. The other, a distant view of the Temple, is harder to explain away. In other examples, Photos has shown it can detect and recognize objects whether they’re in the foreground or background. But it didn’t in this case, and there was no way for me to tag them as such without creating an Album, which isn’t the same as tagging.

Another thing to note: Photos is fluid, and the thumbnails it presents for People, Places and Things can change over time without you necessarily realizing it. This is a mixed bag: it’s good in that Google seems to be learning and evolving in the background, but if you go looking for something where it was before, it can be weird and jolting to find one image replaced with another as the thumbnail, seemingly out of the blue.

Identification headaches

This leads to one of the big potential benefits of Google Photos: Google’s machine learning and neural net engines applied to automatically tagging and grouping images. Aside from the unlimited storage, this was the one thing I looked forward to most after Google Photos’ introduction. But after two weeks of use, this is where my experience proved most mixed.

The default Google Photos view is a reverse-chronological presentation of images, by date. If you want to search for something specific, or you want to see Google’s automatic groupings, you’ll need to click the search bar (in the browser view) or tap the floating search button (on mobile).

Google groups images for you based on three basic categories: people, places and things. Given the sheer volume of images I uploaded, though, I was surprised that Google didn’t identify more groupings. For example, it found 60 people and generated thumbnails for those clusters. For things, the count was just 42.

I’ve seen plenty written about how easily Google can match the progression of a child over the years, or find a man’s face in the background of a picture. But my data set, and my experience, is different: of the 31,000+ images in Google Photos, I’d guess easily half were of gymnasts. A Holy Grail for any photographer is for the software to find and group all of the images of a particular person, regardless of the event, the setting or how many other people are in the shot. This is where I expected Google Photos to excel, but instead it stumbled on my large and varied data set. In fact, the accurate groupings were so few and far between they felt like happy accidents more than the intended result.

Often, the people in the images grouped under a thumbnail were not the person represented in the thumbnail itself. While there were often shared characteristics – for example, blonde, pony-tailed young women in leotards – the reality was they were indeed different people.

That I saw certain physical similarities in images clustered as a single person was pure coincidence, based on what Google tells me. According to Dave Lieb, product lead at Google, the face grouping only uses attributes of the faces, not any details of hair style or clothing. That said, when I looked at the image clusters that perplexed me, in many instances the facial structures that were mistakenly grouped together looked nothing alike. Another reality, and a more worrisome one: the algorithm found some images, but nowhere close to all of the uploaded images of the same athlete. In one case, the algorithm found images of the same young woman and gave her two different thumbnails – with no duplication of the images between the two. And the much-talked-about age-progression facial recognition? In one instance where I uploaded images of an athlete from both junior and senior competitions, the algorithm didn’t make the connection.

Google Photos’ search, retrieval and tagging try to mimic how humans perceive photos, but the service lacks the intuitive finesse humans bring to the task. Once uploaded into the Google cloud, the folder structure is flattened out and disappears. In addition to identifying images based on their content, Photos also uses geotags, timestamps, existing metadata the service can read (some of my images had IPTC captions), and the name of the folder an image was filed in (though with nested folders, it doesn’t capture the top-level folder’s name).
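To get a sense of the signals Photos has to work with when content recognition falls short, here’s a sketch that reads the timestamp and geotag out of a JPEG’s EXIF data locally, using Pillow (the filename is made up; the tag names come from the EXIF standard):

```python
# Reading the metadata signals Photos can fall back on, locally with
# Pillow. The filename is hypothetical; tag names are standard EXIF.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open('img_5678.jpg')
exif = {TAGS.get(tag_id, tag_id): value
        for tag_id, value in (img._getexif() or {}).items()}

print(exif.get('DateTimeOriginal'))   # e.g. '2014:10:12 09:31:05'

gps_raw = exif.get('GPSInfo', {})
gps = {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_raw.items()}
print(gps.get('GPSLatitude'), gps.get('GPSLongitude'))  # empty for most non-phone shots
```

As the output hints, most of my camera-shot images would surface a timestamp but no GPS data at all, which is exactly where Photos’ recognition has to carry the load.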

Google says Photos will learn from your efforts to manually weed out false positives. However, doing so is a chore, and for now can only be done on the smaller mobile screens – which makes correcting large volumes of images even more difficult. I appreciate that Photos gets as far as it does in finding people; it’s frankly better than any other free solution today. But the inconsistent identification is a concern, particularly when coupled with groupings that simply don’t find all the relevant images.

I had similar experiences with the images classified under Places and Things. If an image lacked a geotag, Google Photos was inconsistent at recognizing where images were from and what they represented. Gymnasts were identified correctly as “Gymnastics.” And I could almost understand the images that ended up classified as “Dancing” and “Circus.” But the same types of images were also identified as “Basketball,” “Wrestling,” “Ice Skating,” “Table Tennis,” and “Volleyball.” And there was no way to reclassify those images back to “Gymnastics.”

It wasn’t just the gymnasts that ended up all over the map. Photos rightly identified a stuffed puppy as “Dog” (along with live dogs), but a teddy bear ended up under “Sheep” and “Bear” – and only appeared under “Bear” five days after the first images were uploaded. Google says the indexing is not instantaneous, and that matches my experience. Lieb notes that sorting begins within 24 hours of backup, and continues on a 24-48 hour basis. This explains why searching by some data points (e.g., the original folder name or the location) didn’t work until four or five days had passed.

Another interesting point is how the recognition works in the first place. The recognition engine has its beginnings in what we saw introduced a couple of years ago with Google+. The root is machine learning technology, and that base technology is similar to what powers Google Image Search, but Lieb says “the clustering and search quality technologies are specifically tuned to personal photo libraries.” Perhaps that explains why the teddy bear photos were identified as “Bear,” while Google Image Search reserves “Bear” for the live, breathing variety and identified the stuffed animal as a “Teddy Bear.”

Sometimes, I had luck typing in more abstract search terms. Typing “river” yielded results that included the Chao Phraya river in Bangkok and other waterways, like Victoria Bay in Hong Kong. But the only way I could find a river scene from Bourton-on-the-Water was to search on the date of the one photo Google Photos successfully identified as Bourton-on-the-Water. “Sidewalk” turned up shots that included a sidewalk. “Tablet” turned up photos of tablets … and phones, too.

Other times, I had no luck at all. Typing “child” yielded images of college-age gymnasts that don’t look like children and of people of all ages, ranging from a woman holding a baby to an elderly relative posed alone. So much for narrowing the search.

Searching by criteria like the camera or device used, or other shooting metadata, might be useful in such cases, but that feature doesn’t exist. A competing service, Eyefi Cloud, will be offering this feature for its $49-a-year unlimited storage service (which includes storing the full-resolution originals).
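That kind of search is, at least in principle, straightforward, since the camera model sits right in each file’s EXIF data. Here’s a local sketch of the idea, again with Pillow (the folder path and model string are hypothetical):

```python
# The camera-model search Photos lacks, sketched locally. EXIF tag
# 0x0110 ('Model') is standard; the folder path and model string are
# hypothetical examples.
from pathlib import Path
from PIL import Image

def shots_from(folder, model_substring):
    for path in Path(folder).glob('**/*.jpg'):
        try:
            exif = Image.open(path)._getexif() or {}
        except OSError:
            continue                       # skip files Pillow can't read
        model = str(exif.get(0x0110, ''))  # 0x0110 is the EXIF 'Model' tag
        if model_substring.lower() in model.lower():
            yield path

for hit in shots_from('/photos/2014', 'Note 4'):
    print(hit)
```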

It’s not all bad

I’ve spent a lot of time discussing the foibles and pleasures of search, because that’s such a critical component of what Google Photos offers. However, I’ve found one of the best parts of Google Photos to be the easy, automatic generation of creative collections from your own photos and videos. While the image search and retrieval gave me fits and starts, the creative collections and the random sense of photo rediscovery via the automated Assistant are a trip, and well worth the time investment of uploading my images.

Collections can mix photos and videos together. You have a choice of three types of things to create: Albums, which are just what you’d expect – albums of photos and videos; Stories, a more visual, timeline-style approach to showing photos and videos; and Movies, a video montage of photos and videos. On phones and tablets, you can also create animations and collages, but those options aren’t in the browser version.

The Assistant is something you can choose to enable, and having it on is both amazing and terrifying in terms of what it comes up with. Long-buried images suddenly get resurfaced, and Photos does so in fun ways. It’s not so much that what Photos does is unique – other apps can create GIFs, for example – but it’s how Google Photos automates the process.

I found some of Photos’ pairings for stories odd: the two cities it grouped together made no sense as a pair, even though the dates were close. But I loved how Google Photos found clusters of burst-shot images – something I commonly shoot on my digital SLR. It’s like an instamatic GIF creator, without your having to dig through your archives to find a random series of images. And it’s super easy to share these creations out to friends and social media.
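For comparison, here’s roughly what the Assistant is automating – a burst series stitched into a looping animation, sketched with Pillow (the filenames and the 150ms frame delay are my assumptions):

```python
# Roughly what the Assistant automates: stitching a burst series into
# a looping animation with Pillow. Filenames and the 150ms frame delay
# are assumptions for the example.
from PIL import Image

frames = [Image.open(f'burst_{i:02d}.jpg') for i in range(8)]
frames[0].save(
    'vault_series.gif',
    save_all=True,              # write all frames, not just the first
    append_images=frames[1:],
    duration=150,               # milliseconds per frame
    loop=0,                     # 0 = loop forever
)
```

The code is trivial; the digging through eight years of archives to find the burst in the first place is the part Photos spares you.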

Mind you, the saving process itself needs an overhaul. Once you save one of the Assistant’s creations, it goes back into your library, filed by the original date. That means you have to know how to find that image again – one that Photos resurfaced, and whose date and tags you may not know. See the above discussion for the problem with that. A search on “animations” will find those files, but that’s going to get unwieldy fast, and the trick only works for animations.

I also liked the auto backup feature, for both mobile and desktop. A lightweight desktop client sits in the system tray and simplifies backups. Auto backup alone isn’t the draw of Google Photos, but it was fun to see images populate quickly in Photos just after shooting them on my smartphone.

Life with Photos

For all that I’ve talked about, there’s so much more I could get into with Photos. I didn’t even touch on how the image quality compares with an original image. Nor did I get into editing images (editing is non-destructive, but you do have to be aware of how you save images depending on the platform, and whether you want your new creation synced back to your device, overwriting the view of your original image).

That said, my time with my 31,000-plus images in Google Photos taught me a lot about how the service can best be used. Since this is your personal cloud, and not something you have to manage privacy permissions for, using the free Photos is an easy bet – especially for images captured on mobile devices. If you’re already invested in the Googleverse, it’s especially easy to acclimate to and integrate with.

Mobile users will get the most out of Photos, for a number of reasons. For one, the mobile app is far more full-featured than the browser-based service. Photos’ search capabilities benefit from the fact that images captured on mobile already have geotag data; that, coupled with the auto-creativity, means Photos makes it easy to share content, too. Finally, since our mobile devices have finite capacity, and images are more likely to get lost when a phone ends up in the back seat of a cab or at the bottom of a pool, the cloud storage and backup components are compelling. And did I mention it’s free?

Photos’ proposition for your legacy image collection is a much harder sell. Whether it’s worth the time and effort will depend in part on your existing approach to image organization, and on how much value you put on the free backup, even if it’s of slightly downsized images.

Google Photos’ image search and recognition technology is promising, and even mind-blowing in some ways, but it’s not quite there yet. Based on my experience across tens of thousands of images, I’d guess it’s only about 50% of the way toward the ideal. Perhaps I’d have had a different experience with fewer nested folders, which would have meant more images automatically picking up an extra tag from their folder names.

Ultimately, I look at Photos as a good free choice that supplements, but doesn’t replace other cloud or physical backup options. Regardless of the slight image size and quality compromise, uploading to Google Photos gives you an extra backup, and it does so at no extra cost to you beyond the time to upload your collection.

The easy creativity and random image reveals by the Assistant are great fun, and help you enjoy your photos in a whole new way (and, sometimes, you’ll find images you wish had stayed hidden and forgotten). When the search works as you’d expect, it helps you root through your digital image shoebox in a satisfying way. And when it doesn’t, you’ll get frustrated, as I did – especially since it feels like Photos is so close, yet so far. Is Photos 2.0 ready yet?
