Eye Tracking Technology

Did we ever expect phones to take commands based on our eye movements, videos to pause when we look away from the screen, or a smart scroll feature that moves the page up and down with our eyes? Major breakthroughs in eye tracking have made all of this possible in today’s era.

Eye tracking can be defined as a technique used to record and measure eye movements. The definition is simple enough, but we always get the follow-up questions “how does it record?” and “will it hurt?” Eye tracking data is collected using either a remote or head-mounted ‘eye tracker’ connected to a computer. While there are many different types of non-intrusive eye trackers, they generally include two common components: a light source and a camera. The light source (usually infrared) is directed toward the eye. The camera tracks the reflection of the light source along with visible ocular features such as the pupil. This data is used to extrapolate the rotation of the eye and ultimately the direction of gaze. Additional information such as blink frequency and changes in pupil diameter is also detected by the eye tracker.
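
To make the extrapolation step a little more concrete, here is a minimal Swift sketch of one common approach: mapping the vector between the pupil centre and the corneal reflection (the glint) to an on-screen point of gaze using polynomial coefficients obtained during a calibration routine. The model, the type names and the coefficient values are illustrative assumptions, not any particular tracker’s algorithm.

```swift
import Foundation

// Illustrative sketch only: a simple pupil-glint mapping with an assumed
// second-order polynomial calibration, not a specific vendor's algorithm.
struct ImagePoint { var x: Double; var y: Double }

struct GazeCalibration {
    // Coefficients for gazeX = a0 + a1*vx + a2*vy + a3*vx*vy + a4*vx*vx + a5*vy*vy
    // (and likewise for gazeY), fitted while the user looks at known on-screen targets.
    var xCoefficients: [Double]
    var yCoefficients: [Double]

    private func dot(_ a: [Double], _ b: [Double]) -> Double {
        var sum = 0.0
        for i in 0..<min(a.count, b.count) { sum += a[i] * b[i] }
        return sum
    }

    func pointOfGaze(pupil: ImagePoint, glint: ImagePoint) -> ImagePoint {
        // The pupil-glint vector is fairly robust to small head movements,
        // which is why remote trackers use the corneal reflection as a reference.
        let vx = pupil.x - glint.x
        let vy = pupil.y - glint.y
        let terms = [1.0, vx, vy, vx * vy, vx * vx, vy * vy]
        return ImagePoint(x: dot(xCoefficients, terms),
                          y: dot(yCoefficients, terms))
    }
}

// Example with made-up coefficients for a 1920x1080 screen.
let calibration = GazeCalibration(xCoefficients: [960, 40, 0, 0, 0, 0],
                                  yCoefficients: [540, 0, 40, 0, 0, 0])
let gaze = calibration.pointOfGaze(pupil: ImagePoint(x: 312, y: 204),
                                   glint: ImagePoint(x: 300, y: 200))
print(gaze.x, gaze.y)   // 1440.0 700.0
```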

Like any good technology, eye control is only as useful as the software developers who write for it. Developers need advanced mathematical models and calculations to determine the point of gaze, and the setup needs to be fully automated so that it works accurately and reliably in a wide range of environments. The hardware also still needs to become smaller, use less battery and be more cost effective.

Eye tracking can be used in a wide variety of applications, typically categorized as active or passive. Active applications involve device control, for example aiming in games, eye-activated login or hands-free typing. Passive applications include performance analysis of design, layout and advertising. Other examples are vehicle safety, medical diagnostics and academic research.

Real eye tracking makes it possible to observe and evaluate human attention objectively and non-intrusively, enabling you to increase the impact of your visual designs and communication. What content on your website does not get any attention, and why? How strong is the engagement with your ad, and how do you improve it? In the era of smartphones, it enables eye control on mobile devices, allowing hands-free navigation of websites and apps, including eye-activated login, enhanced gaming experiences and cloud-based user engagement analytics.

Eye tracking in itself has a few strong use cases that will benefit certain industries enormously, but when it is combined with other input methods like touch, speech, GPS, gyroscope and gesture, many niche products and novelties will emerge as eye tracking becomes woven into our surroundings.

We are at the doorway of a revolution in how we interact with our surroundings. Devices will be smarter and better aware of human emotions, mental states and interests, and we will be able to provide valuable content built on these considerations. The age of using only a mouse and keyboard, or touch on a screen, will end soon.

Introduction to Apple’s New Language – Swift

Swift is a multi-paradigm, compiled programming language developed by Apple for iOS and OS X development, introduced at Apple’s developer conference WWDC 2014. The language is designed to eventually replace Objective-C while working with the Cocoa and Cocoa Touch frameworks and the existing Objective-C code already written for various Apple products.

It is built with the LLVM compiler included in the Xcode 6 beta and uses the Objective-C runtime, allowing Objective-C, Objective-C++ and Swift code to run within a single program.

Apple has laid the foundation for Swift by advancing the infrastructure of its existing compiler, debugger and frameworks. The big idea behind Swift is that developers can write their code and see the results in real time, instead of writing line after line of code, compiling, and waiting to see the end result, as in the old paradigm. With Swift, developers can tweak an algorithm (or a parameter) and watch the changes happen right away in the same coding environment. This lets developers toy with concepts faster and build what they are trying to make in less time.

Swift is expected to be more resilient against erroneous code thanks to the following notable features:

1. Swift does not expose pointers or other unsafe accessors by default. It does not require header files either, and statements do not always need to end with a semicolon (‘;’).

2. There is no need to use break statements in switch blocks. Individual cases do not fall through to the next case unless the fallthrough statement is used.

3. Variables and constants must always be initialized before use, and array bounds are always checked.

4. Objective-C’s Smalltalk-like syntax for making method calls has been replaced with a dot-notation style and namespace system that has more in common with other modern C-derived languages like Java or C#.

5. A key portion of the Swift system is its ability to be neatly debugged and run within the development environment, using a read–eval–print loop (REPL).

Other interesting features include multiple return values, generics, type inference, class-like structs, trailing closures and operator overloading; a few of these are shown in the short sketch below. Swift is, in large part, a reimagining of the Objective-C language using modern concepts and syntax.
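
As a rough illustration (the function names here are made up for the example, not taken from Apple’s documentation), the following snippet touches several of the features mentioned above: type inference, no trailing semicolons, multiple return values via a tuple, generics, switch cases that do not fall through, dot notation and a trailing closure.

```swift
import Foundation

// Type inference and no trailing semicolon.
let greeting = "Hello, Swift"

// Multiple return values via a tuple, plus a generic constraint.
func minMax<T: Comparable>(_ values: [T]) -> (min: T, max: T)? {
    if values.isEmpty { return nil }
    var minValue = values[0]
    var maxValue = values[0]
    for value in values {
        if value < minValue { minValue = value }
        if value > maxValue { maxValue = value }
    }
    return (minValue, maxValue)
}

// switch cases do not fall through unless `fallthrough` is used.
func describe(_ number: Int) -> String {
    switch number {
    case let n where n < 0:
        return "negative"
    case 0:
        return "zero"
    default:
        return "positive"
    }
}

// Dot notation on a method call with trailing closure syntax.
let doubled = [1, 2, 3].map { $0 * 2 }

if let bounds = minMax([3, 1, 4, 1, 5]) {
    print(greeting, describe(-2), doubled, bounds.min, bounds.max)
}
```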

As the dust settles around the introduction of Swift, Mac and iOS developers still have plenty to ponder and explore, though many have also warmed to Swift after voicing early concerns. Swift got a full-throated roar of appreciation from thousands of developers because it addresses a major concern that a panel of iOS and OS X developers had with using Swift in actual production code: even though the language is still under development today, apps will not break in the future. For instance, if a Swift app is written today and submitted to the App Store, not only will the app work well into the future with the iOS 8 and OS X Yosemite releases, but the same app can also target back to OS X Mavericks or iOS 7. This is possible because Xcode embeds a small Swift runtime library within the app’s bundle; since the library is embedded, the application uses a consistent version of Swift that runs on past, present and future OS releases.

Lastly, it appears that instead of merely developing a powerful new language, Apple has actually tried to develop a popular one. Swift no doubt has a lot of features that may seem alien to someone coming from Objective-C, but they eventually make a lot of sense and will be pleasant to use. For now, we are in the earliest days of the transition from Objective-C to Swift, and there will continue to be naysayers.

The Beacon World

“iBeacon”, a recent entrant on the tech keyword list, is creating a lot of buzz. Let’s dive in and explore: what are beacons, how do they work, and why will they be important?

GPS services, based on latitude/longitude, are great for navigation and pinpointing locations on maps. But modern smartphone users spend most of their time indoors, and inside multi-storied buildings or underground areas GPS cannot capture an accurate location. This is where beacons come in.

Beacons

Beacons are transmitters that use Bluetooth Low Energy technology to communicate with other devices. They are very small devices with a battery life of about 3-12 months, and they can be easily mounted on walls or ceilings.

In layman’s terms, think of a battery-operated radio transmitting a constant flow of signals. In this case, however, the data transmitted is a small advertisement packet containing a prefix, a proximity UUID, and major and minor values.
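
On iOS, an app can listen for these advertisements through CoreLocation. The sketch below is a minimal illustration of ranging beacons that share a given UUID; the UUID string and identifier are placeholders, not values from a real deployment.

```swift
import CoreLocation

final class BeaconListener: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder UUID and identifier; a real deployment would use its own values.
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "example-store")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // location permission is required
        manager.startRangingBeacons(in: region)   // begin listening for matching beacons
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // major/minor identify the individual transmitter behind the shared UUID.
            print("major: \(beacon.major), minor: \(beacon.minor), proximity: \(beacon.proximity.rawValue)")
        }
    }
}
```

From the major and minor values, the app can then decide which store, floor or shelf the user is standing near and react accordingly.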

Future

The potential isn’t just hype; it’s real. Industries like retail, hospitality and logistics will encounter completely new ways to interact with customers, staff and more.

Just think of these scenarios:

1) You walk into a store and get a personalized welcome notification on your mobile device, along with deals and offers based on your interests and purchase history.

Awesome….!

2) You arrive in a hotel lobby and the reception staff is automatically notified of your arrival and immediately completes the check-in process. Meanwhile, you enjoy your favorite welcome drink, served without you even having to ask: the staff was notified of your presence and your favorite drink the moment you entered the beacon’s range. Wonderful, isn’t it?

3) You are in a big amusement park and your mobile app guides you along the exact path to your favorite ride, the cafeteria and more. Wow!

Rising Curve

Many tech companies, such as Gimbal, BlueCats, Estimote and Kontakt.io, are building platforms and SDKs that can be easily used with beacons.

Within five years, the iBeacon/Bluetooth Low Energy device market is expected to reach 60 million devices (Source: Internet).

Let’s wait and watch how it evolves.

Google Glass

Technology is advancing at an ever-increasing pace. What was the latest thing a year ago is now obsolete, and the revolutionary innovations to come really don’t seem all that far off. Touch phones and tablets brought touch interactivity to the masses, but now wearable, voice-activated technology is pushing the limits of what we can do with a machine, both in terms of size and computing power. If you are still not familiar with Google Glass, prepare to re-evaluate what you think a computer is and to be startled by the latest innovation.

What is Google Glass?
Google Glass is a machine that you can wear like spectacles. Like a smartphone or tablet, it can connect you to just about anything. However, unlike iOS or Android handsets, Google Glass offers hands-free, voice-activated interactivity. If you are tired of constantly looking down at your mobile device, or if you have found yourself wanting to use your device while needing to look somewhere else, Google Glass solves this problem by putting a computer display directly in front of your eye. It sounds unbelievably innovative, almost implausible, but with Google Glass your screen is wherever you look, letting you interact with your computer and the world around you at the same time.

What kinds of things can be done with this computer technology?
With the simple audio cue of “OK Glass” followed by a basic command, you can effectively have Google Glass do anything you would have your smartphone or tablet do. You can send messages, ask Glass to take a photo or capture a video, video chat with other Google account users, convert your voice to text and much more. The technology is still quite new, so it is a safe bet that as more customers and techies get their hands on it, more features will be developed.

Is this the future of computers and technology?
It’s always tough to forecast the future. However, we can say that there will be a market for wearable machines like Google Glass in the coming years. It solves the problem of everyone looking down and interacting with a smartphone or tablet; eye contact has become limited in recent years because of our addiction to our devices. Using an appliance like Google Glass allows normal interaction with the people around us. Also, wearable technology like Glass is portable and user friendly, which seems to be the trend in how our devices are getting better, thinner and more portable. Whenever people have envisioned the future of computers, they have imagined a voice-controlled machine responding to our instructions. That is exactly what Google Glass does: the user instructs it and it responds accordingly. It is not a robot companion, but the voice activation feels like a natural progression of technology such as Apple’s Siri for iOS. For these reasons and more, it is a good bet that more and more people will be using wearable computing devices like Google Glass in the near future.

Is this technology available to the average person?
Well, yes and no. At present, in order to get your hands on Google Glass, you have to give the company a good reason why you are worthy. Through the “Glass Explorers” program, Google’s aim is to get its product, which is still technically in beta, into the hands of people who will use it in resourceful and creative ways. So far, Glass has been offered to teachers, athletes, scientists and others. Recently, Google has extended additional invitations to more people, but it is not available for free. The current price is around $1,500 – certainly reasonable for a powerful machine, but still expensive and definitely more than the average smartphone or tablet. It is expected that within the next few years Google Glass will become more affordable, allowing more people to own the device.

Technology like Google Glass is extraordinary, and it is bringing us closer to our early dreams of the future. Google has updated the prototype and the Glass has become more versatile: it is available in a number of colors and, with quick adjustments, it can work with sunglasses and prescription spectacles. The device’s robustness has also been improved; unlike a pair of sunglasses, Google Glass does not break or go out of shape easily. If you get an opportunity to try on Google Glass for yourself, take it – you will be stunned at how much you can do with a device that is almost completely hands-free.

Google’s Knowledge Vault

In today’s world, the word “search” has almost been replaced by “Google”. Every time we ask someone to look something up on the internet, we say: “Please Google it.”

We feel that Google is part of our lives or, in other words, a necessity of our lives. Without Google we would feel lost in this mysterious world; it helps us find useful information in nearly every arena.

Google already has a huge data bank, but now it wants to grow that data bank into the biggest store of knowledge there is.

Google’s current knowledge base, called the Knowledge Graph, is used by Google to enhance its search engine’s results with semantic-search information collected from a wide variety of sources. The Knowledge Graph display was added to Google’s search engine in 2012. It delivers structured and comprehensive information about a topic in addition to a list of links to other sites.

Now Google is building the biggest store of knowledge in human history – a knowledge base that autonomously gathers and combines data from across the web to provide unprecedented access to all the facts about the world.

The search giant is building the Knowledge Vault, a type of knowledge base – a structure that stores information in a form that an ordinary person can read and comprehend.

The Knowledge Vault is an automated effort to gather, combine and organize facts at web scale, whereas the Knowledge Graph relies on crowd-sourcing to grow its information.

Google started building the Vault by using an automated process to pull in facts from all over the web, using machine learning to turn the raw data into usable pieces of knowledge.

The Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model assigns a more than 90 per cent chance of being true, researchers have reported.
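
To make the idea of “confident facts” concrete, here is a small illustrative sketch (not Google’s actual data model or code) of facts stored as subject-predicate-object triples with a confidence score, filtered at the 90 per cent threshold mentioned above.

```swift
import Foundation

// Illustrative only: a toy representation of facts with confidence scores.
struct Fact {
    let subject: String
    let predicate: String
    let object: String
    let confidence: Double   // model's estimated probability that the fact is true
}

// Keep only "confident facts" above the given threshold.
func confidentFacts(_ facts: [Fact], threshold: Double = 0.9) -> [Fact] {
    return facts.filter { $0.confidence > threshold }
}

let sample = [
    Fact(subject: "Paris", predicate: "isCapitalOf", object: "France", confidence: 0.99),
    Fact(subject: "Paris", predicate: "isCapitalOf", object: "Italy", confidence: 0.03)
]
print(confidentFacts(sample).map { $0.object })   // ["France"]
```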

Tom Austin, a technology analyst at Gartner in Boston, said that the world’s major technology companies are competing to build similar vaults.

Giants like Google, Microsoft, Facebook, Amazon and IBM are all working to build the biggest data stores in different categories.

The Knowledge Vault will be the foundation for smartphone and machine intelligence; Siri is going to get a lot better at interpreting what we mean when we ask questions in the future. The Knowledge Vault’s algorithm is not selective: it will build up information relating to places, people, history, science and popular culture. This raises some privacy concerns, since the program can reach “backstage” information such as data hidden behind websites like Amazon, YouTube and Google+.

In the future, virtual assistants will be able to use the database to make judgments about what does and does not matter to us. Our computers will get better at finding information and anticipating our needs. This could pave the way for enormous medical innovations, the discovery of trends in human history and forecasts of the future.

Once the Knowledge Vault can understand the objects it sees, it will become essential for instant information retrieval. One day we might be able to walk anywhere, point our phone at an object, ask a question about it and receive a smart reply.