Is a picture worth a thousand search words?

Written by Catherine Bolgar

Selecting the right Internet search words can be frustrating. But thanks to broader bandwidth and better picture-recognition technology, future searches may be image- or video-driven.

“There’s a long history of search engines that have tried to use images,” says Greg Sterling, vice president of strategy for the Local Search Association, an industry association of media companies, agencies and technology providers. “Visual search was seen as more directly delivering information than text. Maybe it was a technology thing or timing thing, but they didn’t quite find the right model.”

As smartphones began reshaping the Internet landscape—some 340 million were shipped in the second quarter of 2015 alone—pre-existing visual search engines such as Grokker, Viewzi and SearchMe foundered. Yet the proliferation of smartphones and tablets may have increased demand, because their small screens are better suited to pictures than text.

“Visual is definitely one path forward for search,” Mr. Sterling says. At the moment, when searching for a particular product, “unless you have a specific brand name, it’s hard and frustrating clicking back and forth to different sites.”

An image search “will confirm quickly if it’s what you’re looking for, plus provide customer reviews and other product information,” Mr. Sterling says.


However, image search is not so straightforward. You take a photograph and use it to search for related information, but success depends on the angle, lighting and focus of the photo.

“In the future, maybe it will be the case where you snap a picture of a landmark and get all the information about it,” he says. “What’s open for improvement is using a camera to get information. Inputting a 16-digit credit card number into a small screen on a phone is problematic. You mistype. Today, you can take a picture of the credit card and certain apps will recognize it and process it into the form.”

Images by themselves probably aren’t the future. “Look for a mix of images and structured data, finding what images are, finding other related things and organizing that information with tags and other data,” Mr. Sterling says. “There’s more and more sophistication in how you identify and index, with machine learning and other technology that exists behind the scenes that could apply to a pure text or image model.”

Researchers are working to improve the technological foundations for image searches. A group of universities is developing ImageNet, a database of 14 million images organized by the nouns they depict.

Meanwhile, Lorenzo Torresani, associate professor of computer science at Dartmouth College in New Hampshire, has helped create a machine-learning algorithm that uses images to find documents. However, only a few users annotate their uploaded pictures and videos, and not necessarily accurately. “The repository is expanding at an astonishing rate, but we can’t retrieve content efficiently,” Dr. Torresani says.

Software can check whether the searched-for objects are in a picture and, if so, automatically tag them. “It works, but has limitations,” Dr. Torresani says. “It’s difficult to expose all the content in the picture with predefined classes. And if you use predefined classes, then the search is only accessible through those keywords.”

Another way is to extract visual features, a kind of visual signature, that allow users to search by example. Alternatively, software could translate keywords into a visual signature, because users are accustomed to searching via text. This would work like language-translation software, but translating from text to image instead.
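The visual-signature and search-by-example ideas can be illustrated with a toy sketch. The four-number "signatures" below are invented for illustration (think of a coarse color histogram); real systems use far richer features, often learned by neural networks. The mechanism, however, is the same: reduce each image to a feature vector, then rank the repository by similarity to the query's vector.

```python
import math

def cosine_similarity(a, b):
    # Compare two visual signatures by direction, not magnitude,
    # so differently exposed photos of the same scene still match.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search_by_example(query_signature, repository):
    # Return image names ranked from most to least similar to the query.
    ranked = sorted(repository.items(),
                    key=lambda item: cosine_similarity(query_signature, item[1]),
                    reverse=True)
    return [name for name, _ in ranked]

# Hypothetical repository: each image reduced to a 4-bin signature.
repository = {
    "sunset.jpg":  [0.70, 0.20, 0.05, 0.05],
    "forest.jpg":  [0.10, 0.60, 0.25, 0.05],
    "seaside.jpg": [0.15, 0.10, 0.70, 0.05],
}
query = [0.65, 0.25, 0.05, 0.05]  # signature of the user's example photo
print(search_by_example(query, repository))  # most similar image first
```

A text query could plug into the same pipeline if, as Dr. Torresani suggests, software first translated the keywords into an equivalent signature vector.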

“It could be used to find images or videos that are similar in context or appearance, and link them somehow,” Dr. Torresani says. “It could make the repositories browsable.”

Video is the bigger challenge. “One second of video has 30 images,” he says. “The amount of data we need to analyze a one-minute video is huge. Storage is a problem. Retrieval is a problem. Processing is a problem.”

Yet “even if the recognition process fails on one or two images, we have so many of them and the view maybe changes and the object that was ambiguous becomes clearer later in the video,” Dr. Torresani says. “From that point of view, video is easier than a still image.”
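The scale Dr. Torresani describes is easy to make concrete. A back-of-the-envelope sketch, using the article's 30 frames per second and an assumed uncompressed 1080p frame at 3 bytes per pixel, shows why storage, retrieval and processing all become problems:

```python
# Rough cost of analysing raw video: 30 fps (from the article) and an
# assumed uncompressed 1920x1080 frame at 3 bytes per pixel.
FPS = 30
WIDTH, HEIGHT, BYTES_PER_PIXEL = 1920, 1080, 3

frames_per_minute = FPS * 60
bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
raw_bytes_per_minute = frames_per_minute * bytes_per_frame

print(frames_per_minute)                     # 1800 frames to analyse
print(round(raw_bytes_per_minute / 1e9, 1))  # ~11.2 GB of uncompressed pixels
```

Compression shrinks what is stored, but a recognition system still has to decode and examine those 1,800 frames per minute of footage.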


Catherine Bolgar is a former managing editor of The Wall Street Journal Europe. For more from Catherine Bolgar, contributors from the Economist Intelligence Unit along with industry experts, join the Future Realities discussion.

Photos courtesy of iStock

Lighting the Way to Ambient Intelligence & the Internet of Experiences

By Neno


In cities around the world, street lamps are being tapped as an ideal platform for jump-starting smart city development. Initially, cities began replacing legacy street lighting with LED bulbs equipped with motion sensors that turn the lights on only when a person enters the area, boosting the already considerable energy savings of LED technology. Now, cities are realising they can also equip the LED chipboards on these pervasive networks with an extraordinary range of microprocessors and sensors – among them, smoke detectors, noise detectors, pollution meters, seismic activity detectors, weather sensors and smart video cameras – to dramatically expand the lamp posts’ role in shaping the intelligent, connected cities of tomorrow.

For instance, Shanghai recently deployed trial smart street lamps that function as lighting systems, Wi-Fi hotspots, Internet access hubs and city services links. Residents or visitors can use voice commands or a touch screen to get local information, charge their electric vehicles, check local pollution levels, or call for help via an emergency call button linked to the city’s public service platform.

And now, if the International Consumer Electronics Show (CES) is any indication, it seems vendors interested in shaping the next generation of smart homes are also turning to light fixtures as a primary sensory platform. Several startups and incumbent lighting vendors showcased Internet-connected LEDs that are beginning to go far beyond just customisation and remote control of home lighting.

For example, Stack Lighting touted LED lights that can sense motion, ambient lighting and temperature. To realise the value in these sensing capabilities, the lights can be networked with climate-control systems like Nest. However, one of the most interesting things about this lighting system is Stack’s claim that its lights are so smart, consumers don’t need a smartphone app to control their features and functions: once configured, the lights, in tandem with other home systems, simply adapt to the customers’ behaviour and the environment to deliver the right ambient home experience for them.

While it may be up in the air as to which “thing” – if any one thing – in the home becomes the central hub for sensing and control, it was clear from CES 2015 that smart home systems are edging toward a new world of “ambient intelligence.”

Ambient intelligence is a concept developed in the late 1990s to describe an era when ubiquitous computing, networked devices, environmental inputs and human behaviour would come together in such a seamless way as to render technology wholly invisible, with each human being enjoying an experience that perfectly anticipates and adapts to their unique needs and preferences.

This is the world Eric Schmidt, then executive chairman of Google (now Alphabet Inc.), alluded to at the 2015 World Economic Forum in Davos. Asked for his prediction on the future of the web, he responded: “I will answer very simply that the Internet will disappear.” He went on to explain, “There will be so many IP addresses…so many devices, sensors, things that you are wearing, things that you are interacting with that you won’t even sense it…It will be part of your presence all the time. Imagine you walk into a room, and the room is dynamic. And with your permission and all of that, you are interacting with the things going on in the room.” The result? “A highly personalised, highly interactive and very, very interesting world emerges.”

As CES 2016 approaches, it will be interesting to see if “No app needed!” becomes a mantra of more and more vendors, to see if technology continues to render itself less and less visible, and if people, places and things continue to synthesise into wholly unique, adaptive experiences.

In short, it will be interesting to observe the degree to which the current Internet of Things evolves into its natural successor, the “Internet of Experiences.” To learn more about this evolution, we invite you to explore the cover article for the latest issue of Compass magazine, “BEYOND THE IOT: The Internet of Experiences will change the way the world operates.”

Shifting Design Process: The Cassiopeia Camera Experience

By Estelle

Understanding the needs of multidisciplinary creative teams

This article was written by Teshia Treuhaft and originally appeared on Core77.

The evolution of design as a professional practice is one regularly impacted by developments in other fields. As designers, we often sit squarely between disciplines, streamlining and humanizing products for greater usability and appeal.

Never has the requirement to work between disciplines been as important as it is today. As industrial design becomes increasingly interwoven with service design, user experience design, engineering, manufacturing and more—designers must act as the bonding agent for teams producing innovative products.

In an effort to further understand these emerging hybrid teams of designers, managers and engineers, companies are going as far as studying the trend of co-creation to optimize for social ideation and more collaboration. Likewise, with the speed of technology and pace of product development, having tools and solutions that allow companies to build faster is proving a greater advantage than ever before.


In order to research the way teams work from the inside out, Dassault Systèmes put together a creative team to design the Cassiopeia Camera Experience. Cassiopeia is a concept for a connected camera that has the functionality of a digital SLR, and allows the user to sketch over photos and scan objects or textures. The team took Cassiopeia from inspiration phase to design validation, allowing Dassault Systèmes to gather first-hand knowledge of the needs of each team member and design solutions that directly enhance social ideation and creative design among the group.

Cassiopeia Camera Experience

This research makes clear that, as the project progresses through different phases, the requirements of each contributor change and communication between parties grows more complex. While each phase builds on the last, a well-equipped team will be able to regularly come together during each phase for design validation.

We decided to take a deeper look at development of the Cassiopeia project for unique insight into the inner workings of a team—one that is not only building a product but a holistic experience.

Inspiration Phase

The inspiration phase of any product demands input from a number of key players inside and outside the company. This is often done by compiling references in the form of articles, visuals, sketches and more. A product manager typically leads this phase; however, every member of the team can provide valuable input at this fledgling stage.

Team gathers references and inspiration to define key functions of the product

Communication at the inspiration phase must support amassing source material and then distillation until a key concept emerges. The inspiration phase is particularly important for connected devices like Cassiopeia. In this case, the design team faces not only the task of designing the camera, but also the connected functionality. The complex use cases and physicality of the product must be developed in tandem during this phase for a unified end user experience.

Ideation Phase

Once the inspiration is clear to the team, the next step is narrowing the idea down to a discrete set of requirements. This ideation phase moves the product from discussion of the concept into a physical form for the first time. For this phase, creative designers are tasked with visualizing the product for the team, iterating together and repeating.

Rough sketches give the product a form factor that can be discussed and refined at later stages

Sketching in this phase is essential. It allows the team to understand possible variations and begin to make decisions about a number of factors. During ideation, the ergonomic and functional aspects of Cassiopeia merge for the first time into a rough form factor that can be communicated to the team.

Concept Design Phase

Once the product is visualized for the first time using the 3D sketches, the next step is to model the product at scale. An industrial designer will typically model the product in 3D, testing and refining design variations from the ideation phase.

An industrial designer adds scale and refines features of the device.

With Cassiopeia, this is the phase where shape begins to emerge and the conversation about the product shifts from conceptual to physical. The goals of the design must be clarified and communicated clearly so that the product can seamlessly transition from a design into a physical object that can be considered from a manufacturability standpoint.

Detail Design Phase

Once the industrial designer has taken the design from concept sketch to 3D model, a design engineer takes the model and considers it from an engineering and manufacturing perspective. This shift from design of the device to engineering of the device requires a careful balance to retain as much of the original concept for the form factor as possible.

Foresight during the detail design phase offers ease of manufacturing and greater success in the final product.

This is a key point of communication between the engineer and designer, in order to deliver a product that is not only aesthetically aligned with the inspiration – but can also be manufactured. For Cassiopeia, this requires a seemingly subtle but highly important refinement of surfaces and geometry.

Design Validation Phase

In the final step, the team must simulate the product in order to engage in discussion and finalize the design. Design validation occurs both in the final steps and at regular intervals during the development. There are two main forms this validation takes, led by a visual experience designer and a physical prototyper. A visual experience designer will create a number of detailed renders, while the physical prototyper will develop physical 3D models.

Visualizing decisions is essential to engage key players inside and outside the team

For Cassiopeia this is a key phase, as the camera has a number of complex parts, surfaces and functions. Regular design validation throughout the process gives all members of the team the opportunity to make decisions about the final product. When collaboration is managed well, the multidisciplinary team will arrive at the validation phase having shared expertise at each step of the design process. As a result, the final prototype is a true reflection of their shared vision and is reached more quickly than ever before.

The development process of any electronic device is challenging for teams looking to innovate in their respective spheres. As consumers’ expectations increase for well-designed objects that provide comprehensive product experiences, the ability of teams to collaborate and move quickly will be increasingly valuable. The extent to which teams can effectively collaborate will be a defining factor for success – both for the team and the products they create.

To read more about Dassault Systèmes Solutions and Social Ideation and Creative Design, check out their website and webinar.
