Towards Language-Aware Interfaces – Presentation at CHI 17

In an increasingly globalized world, the language barrier problem becomes more prominent and inhibits proper interaction not only between humans but also with information interfaces in the respective country. Navigating an interface in an unfamiliar language can be challenging and cumbersome, and more often than not, poorly accessible language menus are of little to no help. Implicitly inferring a user’s language proficiency helps relieve frustration and boosts the user experience of the system. The following image shows a sketch of such a language-aware interface.

Report from IEEE PacificVis 2017

In the second half of April, I had the pleasure of attending the 10th IEEE Pacific Visualization Symposium, usually called “PacificVis”, which was hosted this year by Seoul National University in South Korea. I gave a talk at PacificVis about our Notes paper “Implicit Sphere Shadow Maps”, which presents a way to render high-quality soft shadows for particle data sets in real time.

Accumulation of Sensory Evidence in Self-Motion Perception

Within the research group Cognition & Control in Human-Machine Systems at the Max Planck Institute for Biological Cybernetics in Tübingen, we study fundamental principles of human perception and translate them to a variety of applied fields, including the design of virtual environments. One of our research interests, and the topic of today’s blog post, is the perception of self-motion.

How to use Crowdsourcing for Research?

Members of the SFB TRR 161 recently participated in a “Workshop on Crowdsourcing” at the University of Konstanz. The organizers, Franz Hahn and Vlad Hosu, introduced the use of CrowdFlower for quantitative user studies. The intention was to familiarize participants with the platform and the basic concepts of crowdsourcing for user studies. All participants designed and ran their own hands-on experiment to get a better feel for the challenges and benefits of crowdsourcing.

Combining Shape from Shading and Stereo

Inferring the 3D shape of objects shown in images is usually an easy task for a human. To solve it, our visual system simultaneously exploits a variety of monocular depth cues, such as lighting, shading, the relative size of objects, or perspective effects. Perceiving the real world with two eyes even allows us to take advantage of another valuable depth cue, the so-called binocular parallax. Because of the slightly different viewing positions, the images projected onto the retinas of the two eyes differ slightly. While objects close to the observer undergo a large displacement between the images, objects that are far away exhibit only a small displacement. Because nearly all of this happens unconsciously, we usually do not realize how tough this problem really is.
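The inverse relationship between displacement and distance described above can be sketched with the standard pinhole stereo model, where depth Z = f · b / d for focal length f, baseline b (eye or camera separation), and disparity d. The function name and all numeric values below are illustrative assumptions, not part of the method discussed in the post:

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Depth (meters) implied by a disparity, pinhole stereo model: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A nearby object yields a large disparity, a distant one a small disparity
# (assumed values: 800 px focal length, 6.5 cm baseline, like human eyes):
near = depth_from_disparity(disparity_px=40.0, focal_length_px=800.0, baseline_m=0.065)
far = depth_from_disparity(disparity_px=4.0, focal_length_px=800.0, baseline_m=0.065)
# near = 1.3 m, far = 13.0 m: a tenth of the disparity means ten times the depth
```

This simplified model ignores vergence and lens distortion, but it captures why large displacements signal nearby objects and small displacements signal distant ones.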

Eye Tracking and Beyond

Humans rely on eyesight and the processing of the resulting information in more everyday tasks than we realize. We are able to solve moderately difficult quadratic equations in our heads when tracking a flying ball and hitting it with a baseball bat at incredible speeds. We are able to use visual information about dozens of cars to navigate unfamiliar streets while driving. Our eyes calibrate to the lighting conditions, allowing us to navigate in broad daylight just as well as in dimly lit rooms. Beyond that, we can use information about depth of field, color, tint, and sharpness. In fact, it is often said that over 50% of the cortex, the surface of the brain, is involved in vision processing tasks. This makes vision one of our most relied-upon senses. Consequently, understanding what drives our eye movements may be a key to understanding how the brain as a whole works.