When it comes to developer tools, Google made it rain Thursday at its Google I/O developer conference in San Francisco, like a dance enthusiast bestowing dollar bills on a virtuoso performer.
The announcements covered updates to existing software tools as well as brand-new ones, with something for everyone from game developers and virtual-reality media creators to app makers trying to connect our smart homes.
Now it's time for developers to strut their stuff and show what they can do to earn their keep.

Android Studio 1.3 Fills In The NDK Pothole
Two years ago, Google announced Android Studio, an IDE (integrated development environment) for building Android apps. Although it has since graduated out of beta, some developers were still stymied by it—mainly because, if you relied on Google's Native Development Kit (NDK) to write C and C++ code, you were out of luck.
The company’s new upgrade, Android Studio 1.3, now offers built-in NDK support. The news may surprise folks who have been paying attention to Android development, primarily because Google generally discourages the use of the NDK, unless it’s absolutely essential. The company is a staunch believer in using Java to write Android apps.
That doesn’t quite cut it for certain types of apps, like games that rely on physics simulations and other intensive processes. Sometimes, you just can’t beat native code.
Android Studio releases will work somewhat like Chrome's: Developers can stick with the more standard “Stable” channel; dive into new, but possibly buggy, features with the “Beta” channel; or grab the very newest (read: most likely very glitchy, but probably coolest) updates available through “Canary” releases.
As such, the latest version, 1.3, will hit Canary first, so if you've got a strong constitution and are willing to deal with some (likely) bugs, you don’t have to wait for the wrinkles to smooth out later.
More from Google:
Android Studio v1.3 Preview - To help take advantage of the M Developer Preview features, we are releasing a new version of Android Studio. Most notable is a much requested feature from our Android NDK & game developers: code editing and debugging for C/C++ code. Based on JetBrains Clion platform, the Android Studio NDK plugin provides features such as refactoring and code completion for C/C++ code alongside your Java code. Java and C/C++ code support is integrated into one development experience free of charge for Android app developers. Update to Android Studio v1.3 via the Canary channel and let us know what you think.
Google didn't leave out Web developers either. The newly announced Polymer 1.0 lets developers make their desktop and mobile Web apps and services feel more like native apps running locally on devices. They can add toolbars, menus, maps and other features.

Brillo And Weave Aim To Make Smart Homes Simpler
The tech giant took aim at the complexity inherent in the Internet of Things (IoT)—the movement to connect gadgets, sensors, appliances and more to the Internet and each other—in particular, the smart-home niche. Along with its Nest division, the company introduced Brillo, its now-official operating system for the IoT, and Weave, which will give developers a common language so their devices can talk to each other.
Brillo, essentially a modified form of Android, gives budding smart-home players a very small, lightweight, and more secure platform (or so Google promises) that can power a variety of hardware products.
It’s downright stingy when it comes to power management, to help spare batteries in things like sensors and other connected hardware that may not get jacked into a power outlet.
Brillo, which ties into a central command console, will support wireless connectivity through Wi-Fi and Bluetooth Low Energy.
Google bills Weave as its cross-platform communications layer—which means it will allow a variety of devices, not just Android gadgets, to talk to each other. The company says that with Brillo and Weave, it will offer a modular approach that lets developers use one or both. Google will also produce a certification program, to outline specifics and guidelines.
Of course, simplifying the often frustrating, complicated smart-home experience is a popular mission. Groups like the Qualcomm-backed AllSeen Alliance, with its AllJoyn framework, and frameworks like Apple’s HomeKit all share a common goal: getting all those lights, door locks, moisture sensors and more talking to servers in the cloud, and to other local devices, more easily.
Brillo’s developer preview will launch sometime this fall, with Weave to follow later in the year.
These tools just scratch the surface. The company unleashed a slew of other tools in the Android M developer preview covering fingerprint-scanning authentication, cloud messaging, app-install ads, a new developer console and more. Google also revealed its Cardboard “Jump” plans to help virtual-reality video creators piece together their own rigs. For the full rundown, visit the Android Developers blog.
The company did not make any major hardware announcements at its keynote address, though it paid some lip service to products and initiatives such as Chromecast, Android Auto and Android TV.
Instead, it relied on its software and cloud presentations (including Now On Tap and Google Photos) to bring the excitement. For app makers, the company may well have succeeded, if the new tools fill the gaps and simplify things, as promised. We’ll know how well they work once the tools actually get in developers' hands.
Photos by Adriana Lee for ReadWrite
These days, having the ability to capture digital photos and videos on a whim with our mobile devices of choice has become almost effortless. What's proven less than effortless is how we organize this overwhelming accumulation of data over time.
Most of us wind up with a virtual shoebox of digital imagery that we hoard in hopes that it'll come in handy at some unforeseen point in the future—like a scrapbook that never gets made.
At the Google I/O conference on Thursday, Anil Sabharwal, a director of product management, said that by "applying machine language and intelligence" to this chaotic smorgasbord, Google Photos will automate that process. If that doesn't seal the deal, Google is giving us unlimited storage for all of these memories and moments, to boot.

What's The Big Idea, Google Photos?
Google Photos is the successor to Picasa, an early Google acquisition, and to the photo features of Google+, the company's social network.
This third attempt at the photo-sharing market centers around three big ideas, Sabharwal said: giving users a safe, private home for all of their photos and videos that's accessible from any device; taking "the work" out of photos and letting users "focus on making memories, not managing them"; and giving them an easy way to share and save these memories with others.
He demonstrated Google Photos with a gallery that was organized not only by time, as we've seen from practically every other photo-organizing app under the sun, but by context. People, places, and things that feature most prominently in a user's collection are moved to the front, where they're most easily accessible. Browsing from image to image seemed very fluid and quick in the demonstration.
These images can be easily grouped and summoned by their context—people involved, or an event like a snowstorm in Toronto, as Sabharwal showed us—and edited on the go to make "collages, animations, movies with soundtracks, and more." Google even uses facial-recognition technologies to group people together.

How Safe And Unlimited?
If you trust Google with your email, as Gmail users do, or already use Google+ for photos, this probably won't give you pause. But if you're used to keeping the bulk of your photos stored locally on a desktop computer and only selectively uploading certain photos, you'll have to think through whether you're comfortable with the idea.
Google is hardly alone here: Amazon, Facebook, Dropbox, and Apple, too, are pushing people to store all their photos online by default, while only sharing some publicly or with friends.
Google Photos seems like a promising addition to the stable of image and video organization tools available today; we'll keep an eye on how it lives up to its claims and report our full findings once we get a chance to play with it.
Google Photos will be available for the Web and for iOS and Android devices today.
Photos by Adriana Lee for ReadWrite
Google has officially revealed Jump, an open-source VR platform that includes plans for a 16-camera array capable of filming 360-degree, three-dimensional pictures and video.
Clay Bavor, the creator of Google Cardboard and a vice president of product management at the company, announced the new platform at Google I/O Thursday. Here's how he described it:
It’s about capturing and sharing these real-world experiences, like the great wall, the coral reef, in an entirely new way, one that looks and feels like you’re actually there. Because the world is filled with all these awesome places and events, like Great Barrier Reefs, and Golden Gate Bridges, and birthday parties and mountain tops.

Jumping In With Hardware And Software
There are three main components to Jump: the aforementioned 16-camera rig that relies on “very specialized geometry,” an assembler that takes raw footage and compiles it into VR video, and a player—which Bavor revealed will live inside Google-owned YouTube.

Google Cardboard inventor Clay Bavor explaining the three parts that comprise Jump
The plans for the array itself will be made available to anyone who wants to make one, meaning that, just as with Google Cardboard’s 2014 launch, any company will be able to make and sell a Jump array.
Moreover, users can buy any still or video cameras off the shelf and slide them into the rig to start taking 360-degree 3D videos, though Google also announced a partnership with GoPro, which will result in what’s likely to be the highest-end iteration of the Jump camera setup. No word yet on when we’ll see that, or how much it’ll cost.

GoPro's Jump rig
With regard to the assembler, which pieces all 16 cameras’ footage into one VR video, Bavor explained it relies on “computational photography, computer vision, and a whole lot of computers to recreate the scene as viewed from thousands of in between viewpoints everywhere along the circumference.”
The result is not just a circular video, but a stereoscopic one, meaning that it’ll offer users full 3D video in whatever headset they decide to use. And since the videos will be available on YouTube, there will be very few barriers between content and potential viewers.
That’s what makes Jump such a huge reveal: Google has managed to offer a solution to the big VR content problem. Having a virtual reality headset is all fine and good, but unless you have something to use it with, it’ll gather dust. With Jump, professional and amateur video creators will suddenly fill the VR void with tons of videos on YouTube.
We’re about to enter the first VR video boom.

Cardboard and Expeditions
While Jump was the biggest reveal of Bavor’s presentation, he also offered up exciting details about the future of Cardboard, and a new educational initiative called Expeditions.
Since 2014’s Cardboard model was built for midsized phones, 2015’s model takes the phablet craze into account, supporting handsets with displays of up to six inches. The new Cardboard’s construction has also gotten a lot simpler, with the magnetic ring being replaced by a single button, and a three-step construction process.

The new and improved Google Cardboard, which now supports devices with screens measuring up to six inches
Best of all, the new Cardboard will also be compatible with iPhones, suggesting a willingness on Apple’s part to let Google through the App Store door for its virtual-reality efforts.
Expeditions, meanwhile, will bring Cardboard headsets and handsets to schools looking to dive into a virtual reality-based curriculum, controlled by one teacher tablet.
While it’s too soon to say whether or not Expeditions will truly take off, it seems like a low-cost way to prove the educational possibilities of virtual reality in general, and mobile VR specifically.

Google Expeditions
Expeditions will start to roll out to more schools later this fall, while Jump will get underway this summer. The VR revolution seems to be just around the corner, and Google looks to be leading the charge.
Screenshots by Brian P. Rubin for ReadWrite
With Cortana expanding to iOS and Android, and Apple working on upgrades to its own personal assistant, Google Now can't take anything for granted. At Google I/O in San Francisco today, we got a preview of Now on Tap, a new feature arriving with Android M.
As the name suggests, it works with a tap-and-hold on the Home button. The clever part is that Google Now can scan whatever's on screen—whether a chat conversation or a Web page—and bring up relevant information.
See also: Here's What's New In Android M
One example shown on stage was a movie mentioned in an email: Now on Tap brought up reviews, information and a trailer with one push. In another example, tapping on a photo of Hugh Laurie on a website brought up background information about the actor.
Now on Tap was also able to scan a thread inside a third-party chat app and create a reminder to pick up dry cleaning, because that's what the conversation was about. Another demo showed a Skrillex song in Spotify (not a Google app of course); asking "what's his real name?" brought up the correct answer.
In short, it makes Google Now better able to identify key items of interest within apps, parse natural language and prompts, and then take action on them. This is on top of other recent improvements to Google's digital assistant: it now works with third-party apps and is expanding its reach into more areas.
"Too often, you have to leave what you’re doing just to look for what you need somewhere else on your phone," explains the official blog post. "With Now on Tap, you can simply tap and hold the home button for assistance without having to leave what you're doing — whether you’re in an app or on a website." No effort is needed on the part of developers provided their apps are indexed by Google.
Aparna Chennapragada, Google Now's product director, emphasized the three pillars of Google Now: Context, Answers and Actions. It's clear that in the digital assistant race, Google is keen to stay ahead of the competition. Now on Tap is going to arrive with Android M in the third quarter of 2015.
Screenshots of Google I/O taken by ReadWrite
Dave Burke, Google’s vice president of engineering for Android, took to the stage at Google I/O Thursday to reveal a few cool new features coming to Android M. The new features focus on refining the user experience ushered in with Android L last year.
“We’re working incredibly hard to release our most polished Android release to date,” said Burke.

App Permissions
To start, Android M will offer users finer-grained control over app permissions. Currently, when Android users download new apps, they grant broad permissions to various features and data on a device. This has been a source of frustration for users and at times a cause for security concerns, with apps from little-known developers asking for too much personal data.
With Android M, users will have the ability to decide which features each app will be able to access.
For instance, Burke demonstrated the new permissions feature using WhatsApp. When he tapped the microphone icon to send a voice message, a dialog box appeared asking whether he would grant permission for WhatsApp to use the device’s microphone. Permissions will come up each time an app wants to access a different part of your device’s system.

Chrome Tabs & App Links
Apps that present users with links to the Web will now be able to open them in in-app Chrome tabs, rather than launching the Chrome app separately. Even better, those in-app tabs will still retain a user’s Chrome data, including profiles, preferences, and passwords.
A new feature called App Links gives developers the ability to eliminate the annoying "disambig"—short for "disambiguation"—boxes that pop up when there are multiple ways to open particular links. When someone emails you a link to Twitter, you’ll be able to click it and jump right into the Twitter app, rather than having to decide between Twitter and your Web browser each time.

Android Pay & Fingerprint Support
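Under the hood, App Links builds on Android's existing intent filters. As a rough sketch (the activity name and host here are purely illustrative, not Twitter's actual manifest), an app could declare itself the handler for links to a given domain like so, with the new `autoVerify` attribute asking the system to confirm the app-to-website association and skip the chooser dialog:

```xml
<!-- Hypothetical AndroidManifest.xml entry; activity and host are illustrative. -->
<activity android:name=".TweetViewerActivity">
    <!-- autoVerify tells Android M to verify this app's claim on the host. -->
    <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />
        <data android:scheme="https" android:host="twitter.com" />
    </intent-filter>
</activity>
```

If verification succeeds, tapping a matching link jumps straight into the app; if not, the user still gets the familiar chooser.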
The name says it all: Android Pay is Google’s new mobile-payments service to compete with Apple Pay. Burke said that Android Pay will be available at 700,000 stores across the United States, and will work at any payment terminal equipped with NFC. To use it, users will simply unlock their phones and wave their devices over the payment terminal.
Android M will also bring fingerprint support to apps, Android Pay in particular, though it’s contingent on devices having those sensors built in by manufacturers. Devices like Samsung’s Galaxy S6 and Note 4 have fingerprint sensors, but with M’s support, expect to see those sensors on a lot more devices before too long.

Doze
Burke also explained that Android M will bring a new power-saving feature called Doze. Using a device’s built-in motion-sensing capabilities, Doze will know whether or not the device is in someone’s hands, and will drop into a deeper powered-down state to save battery in the long run. It won’t turn off entirely, though, since it’ll still be able to trigger alarms or wake up for incoming chat requests.
Burke said that two Nexus 9 tablets were tested head to head—one running Android L, one running Android M. The Android M tablet lasted twice as long as the one with Android L.

Other Details
Micro-USB will soon disappear from Android devices, as Google is bringing in USB Type-C support. Android M will also bring smarter text-selection controls, as well as better control over volume streams.
The developer preview for Android M is available today, and the official release is set to hit devices starting in the third quarter of 2015.
Screenshots by David Nield and Brian P. Rubin for ReadWrite
iOS already has its own personal assistant app in the form of Siri, but it seems Apple wants a more direct competitor to Google Now: 9to5Mac reports that a new service called Proactive is on the way.
With deep ties to Siri, Contacts, Calendar, Passbook and third-party apps, Proactive would reportedly surface timely and relevant information in the same way that Google Now does. This could be hugely useful on the Apple Watch as well as the iPhone and iPad.
While Google Now and Siri have several features in common, Apple's app concentrates on controlling devices and running searches using voice input. Google Now focuses more on being an intelligent assistant, mining collected location, search and email data to automatically show alerts (like flight delays) when they're needed.
And that appears to be what Proactive is targeting.
"Proactive will automatically provide timely information based on the user's data and device usage patterns, but will respect the user's privacy preferences, according to sources familiar with Apple's plans," says 9to5Mac.
The roots of Proactive can be traced back to Apple's 2013 acquisition of personal assistant app Cue, which enabled users to "know what's next" based on calendar and email information. With the notification and search capabilities inside iOS growing, Proactive is a logical next step.

Battle Of The Digital Assistants

Cortana on Android.
Google Now has become the major component of stock Android—the unmodified version of Android that Google is increasingly pushing on phone makers—and is available on iOS devices and inside the Chrome browser too. With Microsoft's Cortana assistant spreading out to Windows 10, iOS and Android, it's time for Apple to make a move.
Siri has always been seen as Apple's Cortana or Apple's Google Now, but it lacks the smart, pre-emptive elements found in Microsoft's and Google's products. Proactive would plug that gap—9to5Mac says it will show estimated travel times to scheduled events in exactly the same way that Google Now does.
It's another sign of the growing importance of these digital assistants and the ecosystems they tap into: Will we be choosing our next phones based on the digital assistant we get on best with? Or the one that knows most about us from our emails and searches?
9to5Mac says Proactive could even rearrange apps based on the time of day and usage patterns, and that third-party app integration will be an important element of the new service. If Proactive arrives with iOS 9, as is expected, we'll be hearing about it at WWDC.
We'll have to wait and see just how comprehensive the new app ends up being, and how well it competes with Google Now and Cortana. One thing we can predict with a good degree of certainty: It will only be available on Apple's platforms.
Images courtesy of Apple and Microsoft
It’s already pretty easy to go shopping on the web, but Google may have a way to make it even easier. A new “Buy” button is set to make an appearance alongside Google Search results, said Google Chief Business Officer Omid Kordestani at the Code Conference on Wednesday.
“There's going to be a buy button,” said Kordestani, as reported by the BBC. “It's going to be imminent.”
The Buy button would allow users to stay within Google’s search platform, reducing the “friction” between the impulse to buy and actually doing so. Users won’t have to navigate to retailers’ websites to follow through on their shopping desires, while Google will presumably collect a small percentage on each successful purchase.
It’s unclear just now how the system will work in practice, though—including whether retailers will be able to opt in or out of the system, and what impact refusing the system would have on search rankings.
Chances are very good the tech giant will reveal more details at the Google I/O conference. We’ll know soon, but one thing is clear now: Google definitely has plans to make it easier for you to spend your money online.
Google I/O, the company's annual developer conference, is upon us, and we'll be scouring the event for what's new and noteworthy. Were our predictions right on the money, or way off the mark? Let's all find out together!
The Google I/O keynote kicks off at 9:30 a.m. Pacific. We'll be covering the keynote, as well as the rest of the conference throughout Thursday and Friday.
Join us, won't you?
ReadWrite's Adriana Lee is on the scene at the Moscone Convention Center in San Francisco. You can also watch the keynote live on the I/O website.