The animated turning of pages in a digital magazine, the whir of a camera shutter when you snap a smartphone picture. Designers have a word for such ornaments, taken from the old and grafted onto the new: skeuomorphs.
Detractors say skeuomorphs represent the triumph of familiarity over function. Why make an electronic notepad look as though it is leather-bound?
But their defenders say that’s exactly the point: you may be able to simply swipe through a document, but the riffle of virtual pages is reassuring to newbies.
Now, the advent of textured screens and web pages promises a whole new wave of skeuomorphism: that leather binding will not only look like leather, it will feel like it too.
Such familiar sensations will no doubt be welcome as we get to grips with haptic devices. But skeuomorphs tend to outstay their welcome, sometimes persisting even after their originals become obsolete, like those whirring camera shutters.
Surveillance Camera Data Used to Analyze Shopping Preferences
Millions of security cameras capture constant video at businesses and retail locations throughout the U.S., but for the most part their footage is only useful if someone shoplifts and police need to review it. Yet there’s a wealth of data buried in that video, from customer density to shopping preferences. A new startup can analyze surveillance video to help business owners see what their customers do, much as websites track online shoppers’ browsing habits. Prism Skylabs, based in San Francisco, installs software on computers linked to existing security cameras. The program uses computational photography — sort of like the Lytro light field camera — to produce images with higher resolutions than the original grainy CCTV video, and then edits out people’s identities for privacy’s sake. Humans appear as ghostly figures or are edited away completely, leaving colorful discs in their place that depict a crowd’s size and density.
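Prism Skylabs hasn’t published its pipeline, but the core idea — separating people from the static scene, painting over them, and reporting how much of the frame they occupy — can be sketched with plain background subtraction. Everything below (the function name, the threshold, the flat grey “ghost” colour) is illustrative, not the company’s actual method.

```python
def anonymize_frame(frame, background, threshold=30):
    """Replace foreground pixels with a flat 'ghost' value and report density.

    frame, background: lists of rows of (r, g, b) tuples, same dimensions.
    A pixel whose mean absolute channel difference from the background
    exceeds `threshold` is treated as part of a person and painted grey.
    Returns (anonymized frame, fraction of pixels that were foreground).
    """
    out, hits, total = [], 0, 0
    for frow, brow in zip(frame, background):
        out_row = []
        for fpx, bpx in zip(frow, brow):
            diff = sum(abs(f - b) for f, b in zip(fpx, bpx)) / 3
            total += 1
            if diff > threshold:
                hits += 1
                out_row.append((128, 128, 128))  # ghostly flat grey
            else:
                out_row.append(fpx)              # static scene passes through
        out.append(out_row)
    return out, hits / total
```

The returned density number is the kind of aggregate a shop owner would see — crowd size without identities — while the painted-over frame is what makes the footage safe to share.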
Invoked Computing: When Everything is an Interface, Who Needs Interfaces?
Lead researcher Alexis Zerroug explains:
In this project we explore… a ubiquitous intelligence capable of discovering and instantiating affordances suggested by human beings (as mimicked actions and scenarios involving objects and drawings).
Miming will prompt the ubiquitous computing environment to “condense” on the real object, by supplementing it with artificial affordances through common AR techniques. An example: taking a banana and bringing it closer to the ear. The gesture is clear enough: directional microphones and parametric speakers hidden in the room would make the banana function as a real handset on the spot. (…)
To “invoke” an application, the user just needs to mimic a specific scenario. The system will try to recognize the suggested affordance and instantiate the represented function through AR techniques (another example: to invoke a laptop computer, the user could take a pizza box, open it and “type” on its surface).
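At its core, the dispatch the researchers describe is a lookup: the environment recognizes an object plus a mimicked gesture, and if that pair suggests a known affordance, the corresponding application is “condensed” onto the object. A toy model of that matching step, with entirely illustrative scenario names:

```python
# Toy model of invoked computing's scenario matching: an (object, gesture)
# pair either suggests a known affordance or it doesn't. The scenarios and
# identifiers here are made up for illustration; the real system recognizes
# gestures from camera input rather than receiving labels.

AFFORDANCES = {
    ("banana", "raise_to_ear"): "telephone handset",
    ("pizza_box", "open_and_type"): "laptop computer",
}

def invoke(obj, gesture):
    """Return the application to instantiate on the object, or None."""
    return AFFORDANCES.get((obj, gesture))
```

So `invoke("banana", "raise_to_ear")` resolves to the handset, and an unrecognized mime simply resolves to nothing — the hard part in practice is the perception that produces those labels, not the dispatch itself.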
…one reason operators hate changing lenses is reportedly because of crippling DRM on Sony’s digital projectors, which “will shut down on you” if a mistake is made when resetting the system. So, they just don’t change them, because serving a ruined product is better than serving no product at all.