Tuesday, November 18, 2014

Molecular copy machine

Today there are ways to measure the 3D surface of molecular and atomic structures, e.g. atomic force microscopy (AFM), which scans a surface and, based on the force exerted on its tip, reconstructs the structure of that surface. The same AFM can also apply force or, by running a current through its tip, bleach or otherwise modify the surface of certain materials.

I suggest connecting two such AFMs, one as the reader and one as the writer, to form a molecular copy machine. The first scans a surface and immediately transfers the 3D information to the second, which bleaches or modifies the surface underneath it with the same pattern.
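The reader-to-writer pipeline can be sketched in a few lines of code. This is a minimal software simulation, assuming hypothetical driver objects for the two AFMs; real instruments would replace the simulated `scan` and `etch` calls.

```python
class ReaderAFM:
    """Simulated scanning AFM: returns a height for each (x, y) point."""
    def __init__(self, surface):
        self.surface = surface  # 2D list of heights

    def scan(self, x, y):
        return self.surface[y][x]


class WriterAFM:
    """Simulated writing AFM: bleaches/etches a target surface."""
    def __init__(self, width, height):
        self.surface = [[0.0] * width for _ in range(height)]

    def etch(self, x, y, depth):
        self.surface[y][x] = depth


def copy_surface(reader, writer, width, height, negative=False):
    """Stream each scanned point straight to the writer.

    With negative=True the writer etches the complement, producing
    a mold of the scanned surface rather than a replica."""
    max_h = max(max(row) for row in reader.surface)
    for y in range(height):
        for x in range(width):
            h = reader.scan(x, y)
            writer.etch(x, y, max_h - h if negative else h)


original = [[0.0, 1.0], [2.0, 3.0]]
reader = ReaderAFM(original)
writer = WriterAFM(2, 2)
copy_surface(reader, writer, 2, 2)
```

The `negative` flag corresponds to the etched-mold variant discussed below: instead of reproducing each height, the writer etches its complement relative to the tallest point.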

An even more amazing goal, although I'm not sure it is feasible today, is to copy the actual 3D structure, i.e. not merely modifying the surface but depositing the appropriate number of atoms onto it so as to reproduce the same 3D surface. A more practical route is probably to etch the negative of the scanned surface onto a new one.


Can this be done digitally, in the sense of high-fidelity copying? If so, it could be a revolutionary way of storing data, art and communication in the 3D surface of atomic structures.

Augmented Vision Part II – Feeling air

Today there are intricate models of air-flow, even in urban areas. Furthermore, based on current weather measurements, there is extremely detailed pressure, humidity and temperature information for almost every place on Earth. Wouldn't it be great to actually see and experience it un-filtered?

I suggest an augmented reality device, e.g. Google Glass, that can present the current atmospheric situation at the place you are now: seeing the actual air-flows, humidity and pressure variations, and temperature gradients. This information can instantly connect a person to the surrounding environment.

It can also help scientists and researchers understand their models of air-flow. By connecting the augmented reality not to models but to in-place sensors, one can "go around and measure the flow of air with one's own eyes". That would be a unique experience, and one, I propose, that can introduce a new appreciation and sensation of the environment and its intricacies.


Another option is to augment vision with another sense, for example tactile information. In a recent work, computational vision amplified movements so minute that they could not be seen by the naked eye. One can do the same with air-flows: in the same context of walking and seeing air-flows, one can add a wearable bat (see my previous post), such that minute air-flows and winds are detected, amplified and felt via vibration actuators in the clothing. This can create a fully immersive experience: seeing air and feeling its flow.
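The amplification step above can be sketched as a mapping from sensed air-flow speed to vibration intensity. The logarithmic curve and the gain value here are illustrative guesses, not calibrated constants; the point is only that minute flows get boosted into the perceptible range.

```python
import math


def vibration_levels(flow_speeds, gain=20.0, max_level=255):
    """Map air-flow speeds (m/s) from wearable sensors to vibration
    intensities (0-255). A logarithmic mapping amplifies minute flows
    so that barely perceptible winds are still felt on the skin."""
    levels = []
    for v in flow_speeds:
        level = min(max_level, int(gain * math.log1p(abs(v) * 100)))
        levels.append(level)
    return levels


# Still air stays silent; a faint 1 cm/s draft already registers.
print(vibration_levels([0.0, 0.01, 0.1, 1.0]))
```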

Monday, October 6, 2014

3D letter based toys

Inspired by WordWorld, where words come alive, the basic concept is that words create the thing they represent. Thus a dog is created and shaped by the letters D-O-G; every object in that world is made up, literally, of the letters that name it.

I propose designing toys that are made up of the letters that spell them. Thus, for example, a car would be made up of the letters C-A-R. The challenge here is two-fold: (i) designing toys from the letters, which is relatively easy, and (ii) designing the letters to fit several toys. For example, C should be part of both C-A-T and C-A-R, yet have a single shape that serves both.

Hence, I propose an automatic design algorithm that receives a list of 3D models of toys, or at least their 3D silhouettes, and the letters that spell them. The output of the algorithm is a 3D design of the letters and their attachments, such that one can create the toys simply by attaching the letters together.
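Before any 3D design happens, the constraint side of the problem can be made explicit: collect, for each letter, every slot it must fill across all the toys. A real solver would then optimize one mesh per letter against all of its slots. A minimal sketch:

```python
def letter_constraints(toys):
    """toys: dict mapping toy name to its spelling, e.g. {'cat': 'CAT'}.
    Returns a dict mapping each letter to the list of (toy, position)
    slots it must fit, i.e. the shared-shape constraints."""
    slots = {}
    for toy, word in toys.items():
        for pos, letter in enumerate(word.upper()):
            slots.setdefault(letter, []).append((toy, pos))
    return slots


def shared_letters(toys, min_uses=2):
    """Letters appearing in at least min_uses slots: the hard cases,
    since a single shape must serve every one of their slots."""
    return {l for l, s in letter_constraints(toys).items()
            if len(s) >= min_uses}


slots = letter_constraints({"cat": "CAT", "car": "CAR"})
```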


The real challenge arises when many toys are presented, such that each letter appears at least twice or thrice. Then the algorithm must output a single design per letter from which all the toys can be made. This is a truly constrained problem that should prove a challenge to 3D designers. The reward would be cool educational toys that anybody can 3D print. Good luck.

DIY motorized home

There’s a new trend in the world called the “Internet of Everything”, which, briefly stated, means that everything will be connected to the internet: your devices, your refrigerator, car, home, everything. This is a cool idea, but I think something fundamental is missing from the concept, and that is action, motorized action. In other words, while everything will be known, perceived and shared via the internet, nothing will actually happen in the real world, since there are no motors involved.

There are attempts to create fully automated houses, where everything that can move is motorized, e.g. doors, drawers, maybe even chairs. These attempts are extremely expensive and are implemented only in specific, research-oriented houses.

I propose to create a Do-It-Yourself motorized home, by designing a motor-box that can be attached to anything that moves in the house, e.g. cabinet doors, front doors, drawers, etc. The box will contain the following components: (i) a motor; (ii) a hinge-based driving system; (iii) a circuit board with the motor controller and a wi-fi module to connect to the local network; (iv) an easy-to-remove battery pack. The whole box should be relatively small, to fit inside drawers and the like.

When a box is installed inside a drawer, the hinge-based driving system is connected to the moving part and the box itself to the fixed part. With a local IP address that transmits and receives the motor's current position, one can control the motor remotely. The removable battery pack makes the box easy to install and to replace.
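The control logic of one box can be sketched as follows. The motor itself is simulated here; in the real box this class would sit behind a tiny web server at the box's local IP address, with the percent-open position as the value being transmitted and received.

```python
class MotorBox:
    """One motor-box: drives a drawer or door between 0 (closed)
    and 100 (fully open) percent of its travel."""

    def __init__(self):
        self.position = 0  # percent open

    def move_to(self, target):
        """Command a new position; out-of-range requests are clamped
        to the physical travel limits of the hinge system."""
        target = max(0, min(100, target))
        # A real driver would step the motor here; we just record it.
        self.position = target
        return self.position


box = MotorBox()
box.move_to(75)  # "open the drawer three quarters of the way"
```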


Consider a house filled with these boxes, wherein all the doors, drawers and other movable objects can be controlled from afar, either via your smartphone or even from your office. For handicapped people that could make all the difference; for everyone else it could simply make life easier and cooler.

Wednesday, October 1, 2014

3D cubism

One interpretation of cubist art is the projection of three-dimensional objects onto a two-dimensional canvas in an abstract way. Nowadays, we have 3D scanners and 3D printers that can render 3D objects extremely precisely. The abstractness is no longer needed.

However, there is another dimension now, isn't there? What about time? I propose combining 3D technology with video: film an object over time, i.e. create a video, and then 3D-print it, where the third printed dimension is time rather than physical depth. The produced object then depends entirely on the camera's point of view and not merely on the object itself. Many kinds of objects can be created: from a rotating view, a receding view, or a hand-held free-form view.
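The core construction can be sketched as stacking the 2D silhouette of each video frame into a 3D voxel volume whose z-axis is the frame index. A real pipeline would then convert the voxels to a printable mesh; this sketch stops at the voxel volume.

```python
def video_to_voxels(frames):
    """frames: list of 2D binary silhouettes (lists of 0/1 rows),
    all the same size. Returns volume[z][y][x], where z = time."""
    return [[row[:] for row in frame] for frame in frames]


def solid_voxels(volume):
    """Count filled voxels, i.e. the printed material."""
    return sum(v for frame in volume for row in frame for v in row)


# Two frames of a "shrinking square": the printed object tapers
# along z as the filmed object shrinks over time.
frame0 = [[1, 1], [1, 1]]
frame1 = [[1, 0], [0, 0]]
volume = video_to_voxels([frame0, frame1])
```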

To create truly cubistic 3D abstract art, one can take snapshots of the same object from different angles and thus create a virtual movie of that object. The 3D-printed construct will have an abstract shape along its z-dimension, composed of the different angles of the real 3D object.

Then, just as with old-fashioned 2D cubism, the observer is left puzzling over what the actual object is.

Thursday, September 25, 2014

Evolving the infinite game

The “Infinite Game” is a game whose purpose is to never end. In other words, a successful game is one that does not end; if it does, you lose. The idea was brought up in the context of evolution and technology. Furthermore, the concept implies not just that the game does not end, but that it increases in diversity, complexity and specialization, just like evolution and technology. In other words, the game’s purpose is to have more of itself along every “complexity” dimension.

Another ingenious idea, by Schmidhuber, is to evolve the rules of a game so that it becomes enjoyable. The novelty here is the definition of an enjoyable game. According to Schmidhuber’s theory of fun, an enjoyable game is one that is learnable, i.e. hard but feasible to learn. He then evolves the rules of the game to increase learnability, as measured by how long it takes a simulated player (a neural network) to learn the game.

Let us combine the two ideas. Now we want to evolve a game whose purpose is to never end, and yet to be enjoyable. This means that the rules must not only be evolved but also keep changing over time; otherwise it would not be an “infinite game”. Hence, the evolution will have to balance the rate of learnability, i.e. how fast one can learn the rules and understand the game in order to advance, against the rate of rule change, i.e. how fast the rules change such that a good player becomes not so good again.
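A toy sketch of this evolutionary loop, with the two rates reduced to plain numbers: `learn_rate` for how quickly a player masters the current rules, `change_rate` for how quickly the rules drift away from the player. The fitness rewards rulesets where the two are matched, so the game stays hard-but-learnable indefinitely. All numbers and the fitness form are illustrative, not from any real game or from Schmidhuber's actual setup.

```python
import random


def fitness(ruleset):
    """Perfectly matched rates -> the player never fully wins or
    falls hopelessly behind, i.e. the game never effectively ends."""
    learn_rate, change_rate = ruleset
    return -abs(learn_rate - change_rate)


def evolve(pop, generations=50, seed=0):
    """Simple (mu + lambda)-style loop: keep the best half, mutate
    them to produce children, repeat."""
    rng = random.Random(seed)
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = [(max(0.0, l + rng.gauss(0, 0.1)),
                     max(0.0, c + rng.gauss(0, 0.1)))
                    for l, c in survivors]
        pop = survivors + children
    return max(pop, key=fitness)


init_rng = random.Random(1)
population = [(init_rng.uniform(0, 2), init_rng.uniform(0, 2))
              for _ in range(20)]
best = evolve(population)
```

After a few dozen generations the surviving rulesets have learn and change rates that nearly cancel, which is exactly the balance argued for above.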

This concept of evolving a game to be both fun and never-ending leaves out an obvious aspect of games: actual winning. Here, as in Schmidhuber’s idea, the game can end and can be “won”. But now it is the game’s desire, not the player’s, that it not end. There is thus a balance between the player’s goal of ending the game by winning it and the game’s goal of continuing and challenging the player.


If evolved correctly, this can result in a rule for how rules change. In a more complex setting, this rule can depend on the progress of the player, her learnability and her adaptation to the very rule that changes the game. Yep, this is a circular thing that can get into your head, but it could create super cool games!

Saturday, September 13, 2014

Close-loop body

A combination of newly emerging fields may enable the opportunity to "close the loop" on our own body. The first is "personalized medicine", which promises to tailor to each person the specific medication she requires, based on a full analysis of her genome, epigenome, proteome and other -omes. The basic tenet of this new and exciting field is that by knowing each person's genetic and protein make-up, we can better design a medicinal treatment for each illness.
The second field is "self-monitoring", which at its extreme means taking a blood sample each day and completely analyzing its cellular and molecular content. In an ambitious self-experiment, a researcher (I forget his name) did just that on himself for several weeks. One finding, for example, was that these measurements indicated a flu much earlier than any symptom he felt. Furthermore, with the prices of such tests decreasing along a Moore's-law curve, each person may soon be able to administer such measurements on a daily basis.
Finally, the field of "specialized consumption" (I made this name up) claims that one can drink a single fluid that contains the entire nutritional requirements of the human body. No other food sources are needed to survive and thrive. These liquids, while probably neither tasty nor cheap, enable much tighter control over what goes into our body.
Combining these three fields, I suggest an experiment to try to close the loop on a human body. By this I mean that a person consumes only the aforementioned liquid, thus fully controlling the input to the body, while at the same time completely monitoring the body's functions via the aforementioned measurements. Closing the loop here means changing the contents of the input liquid based on the results of the measurements, in an attempt to reach some kind of equilibrium.
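The closing-the-loop step can be sketched as a simple proportional controller: each day, nudge the amount of a nutrient in the liquid toward whatever brings a measured blood marker back to its target. The body model below is of course a toy stand-in, and all numbers are invented for illustration.

```python
def adjust_input(dose, measured, target, gain=0.5):
    """Proportional correction: if the marker is above target,
    reduce tomorrow's dose, and vice versa (doses can't go negative)."""
    return max(0.0, dose - gain * (measured - target))


def simulate(days=30, dose=5.0, target=10.0):
    """Toy closed loop: a fictitious blood marker that decays by half
    each day and rises with the dosed nutrient."""
    marker = 20.0  # start out of equilibrium
    for _ in range(days):
        marker = 0.5 * marker + dose  # toy body response
        dose = adjust_input(dose, marker, target)
    return marker


final = simulate()
```

In this toy model the marker spirals in toward the target within a few weeks; the real point is only that measurement-driven adjustment of the input is what "equilibrium" would mean operationally.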
Obviously this is not an easy experiment, but neither is it dangerous or harmful in any way. The results would obviously differ for each person, but by closing the loop one may attempt to "conform" the body's function.

If this experiment works, it means a whole new level of human existence, in the sense that one can have full, and rather straightforward, control over one's own bodily functions.

Monday, September 1, 2014

Objective news assessment

News comes in all forms and shapes. While most people filter it according to content and subjective assessment of interest, why not try to introduce some objective measures of each news item? I propose several relatively easy and simple-to-implement assessment criteria by which news can be categorized and then filtered or segmented.

The first is the number of people the item reports on. While this seems like a crude measure of importance, a news item about one person who got rehabilitated (no matter who that person is) is objectively less important than a famine affecting several million people. By tagging each news item with the number of people reported on (or a rough estimate thereof), one can slice news by people-importance.

The second is monetary: the cost of the item reported, whether in tax-payer money, cost of reconstruction, cost of change, influence on GDP, etc. This gives a rough estimate of the financial effect of the news item. Mind you, not only financial news should be assessed by this measure, but every news item. Thus every natural disaster or car accident, every joyous event or festival, should be tagged with a monetary measure, so that items can be immediately filtered by their financial influence.

The third is the political hierarchical level of the news item. By this I mean: how far up does the news item reach? Obviously, presidential reports are pretty far up, but senators, mayors and even local news can be rated by objective measures of hierarchy, e.g. how many people stand between the people involved and the president. This way one can assess the political influence the news item has.
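The three measures amount to a tagging scheme, which can be sketched as follows. The field names, the example items and the thresholds are my own illustrative choices.

```python
from dataclasses import dataclass


@dataclass
class NewsItem:
    headline: str
    people_affected: int    # measure 1: rough head-count
    monetary_cost: float    # measure 2: financial effect, in dollars
    hierarchy_level: int    # measure 3: steps below head of state (0 = president)


def filter_by_importance(items, min_people=0, max_level=99):
    """Slice the feed on the objective tags instead of on content."""
    return [i for i in items
            if i.people_affected >= min_people
            and i.hierarchy_level <= max_level]


feed = [
    NewsItem("Local festival", 2_000, 50_000.0, 8),
    NewsItem("Famine report", 3_000_000, 1e9, 3),
    NewsItem("Celebrity rehab", 1, 0.0, 99),
]
important = filter_by_importance(feed, min_people=10_000)
```

With the people-importance filter set to ten thousand, only the famine survives, which is exactly the ordering argued for above.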


One can introduce other objective measures of news items, e.g. technological measures, social measures, etc. This way, one can filter, select, evaluate and relate to news items based not only on content but on actual measures of influence, and news media themselves can be assessed on the importance of their items. Obviously, this is not intended to replace content filters but to augment them. It would be nice to see how much of prime-time news is actually about important things.

Sunday, August 31, 2014

Anticipatory headlights


When you drive in the dark, every little aid to navigating the black surroundings is most appreciated. Obviously, headlights are your prime tool. However, when you take a turn, even without any obstructions, in the dark you have no idea where you're driving to. The reason is simple: the headlights shine straight ahead of the car, whereas you're looking sideways, toward where you are going to be.

Research on rodents can shed light on this kind of problem. It has been shown that rats, which use their whiskers (their long facial hairs) to feel their surroundings in the dark, actually sweep the whiskers toward where they are going to be, not only where they are. When they start to move their heads to the left, the left whiskers spread more and whisk more, in anticipation of detecting obstacles where the head is going to be.

Why not transfer this lesson from rats' whiskers to cars' headlights? I suggest that when you either use your turn signal or start to turn the steering wheel, a mechanism shifts the headlights toward where you are going to be, not just straight ahead. This way, the driver will be able to see what's coming and not just bump into it. Hopefully, this kind of contraption can also save lives in the dark.
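The anticipation rule can be sketched as a small function from the driver's inputs to a headlight swivel angle. The turn signal gives an early bias before the wheel even turns, mimicking the rats' anticipatory whisking; the gain, bias and swivel limit are illustrative guesses, not automotive specifications.

```python
def headlight_angle(steering_deg, signal=None, gain=0.7, max_swivel=25.0):
    """Return the headlight swivel in degrees (positive = left),
    driven by the steering-wheel angle and the turn signal."""
    angle = gain * steering_deg
    if signal == "left":
        angle += 5.0   # anticipatory bias before the wheel moves
    elif signal == "right":
        angle -= 5.0
    # Clamp to the mechanism's physical swivel range.
    return max(-max_swivel, min(max_swivel, angle))


# Signaling left already swivels the beam, even at zero steering.
print(headlight_angle(0, signal="left"))
```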

Wednesday, July 30, 2014

Automatically adjusted tires

It has always fascinated me that there are two completely opposite types of movement on the ground: skating/sliding and wheels. The former requires no friction while the latter requires maximum friction. Tires are designed to have maximal friction with the road, to get a better grip and reduce the sliding that may cause loss of control over the car. However, tires are fixed while the road may change. There are different types of tires, if you know which road you're going to be driving on. I'm not talking about snow or ice, but about different types of roads such as asphalt, dirt and concrete. A tire can be optimized for each one, but not for all of them.
Nowadays there are materials that can change their shape and other mechanical properties very rapidly, usually through an electrical current, such as shape memory alloys. I propose embedding these materials in tires, making them automatically adjustable to the road being driven. One way to do this is to control the grooves in the tires, which lend them their grip. By lining these grooves with shape memory alloys, one can adjust their width and shape, and thus change the mechanical and frictional properties of the tire. If one can thus switch, automatically, between a tire that is good for asphalt and one that is good for dirt, the grip on the road will remain optimal at all times.
A question arises as to how to ascertain the type of road the car is on. One can do it manually, by adding a dial on the dashboard to select which tire setting is now "on". Another way is to embed a sensor, either in the tire or, probably preferably, on the car, that detects the type of road and automatically adjusts the shape of the tires' grooves to the optimal one.
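The sensing-to-adjustment step can be sketched as a classifier from a road-roughness reading to a groove profile for the shape memory alloy to assume. The thresholds, the roughness measure and the profiles are all invented for illustration.

```python
# Hypothetical groove profiles per road type (widths/depths are
# illustrative, not real tire specifications).
GROOVE_PROFILES = {
    "asphalt": {"width_mm": 8, "depth_mm": 7},
    "concrete": {"width_mm": 10, "depth_mm": 8},
    "dirt": {"width_mm": 14, "depth_mm": 10},
}


def classify_road(roughness):
    """roughness: unitless vibration measure from a chassis sensor;
    higher means a rougher surface."""
    if roughness < 0.3:
        return "asphalt"
    if roughness < 0.6:
        return "concrete"
    return "dirt"


def groove_setting(roughness):
    """Pick the groove profile the alloy should switch to."""
    return GROOVE_PROFILES[classify_road(roughness)]
```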
The last issue is the electricity required for the change. Usually, these materials require energy only when switching from one state to the other, but even if not, the car has a battery, and the amount of electricity required is minimal compared to a car's other demands.

To conclude, one can not only optimize the materials and shape of the tire; by combining both, one can enjoy the best of all worlds, dynamically optimizing the shape of the tires' grooves to give the best grip and friction for the specific road traveled, thus increasing the driver's and passengers' safety.

Friday, July 25, 2014

Pulsed-taste Candy

In recent psychological studies, Dan Ariely has shown something that is pretty obvious and yet powerful: it is better to have good things with breaks in the middle; that's how you enjoy the good things more. Each start matters, and you do not habituate to the "goodness" and thereby find it boring. (A side note is that the bad things should be lumped together, to get them over with and to habituate to them. Something most people don't do.)

Applying this concept to candy, which is (I believe by definition) a good thing, points to something lacking in the candy repertoire: a pulsed-taste candy. This means that the good taste of the candy should come in pulses, with breaks of neutral taste in between. Candies are usually "optimized" either for a very good taste at the beginning (which sometimes leaves an after-taste), or for a "lasting taste", to which we habituate, losing the usefulness and enjoyability of the taste.

My proposition is to engineer a candy that has a good-neutral-good-neutral-etc. sequence of tastes. How to do that? I can think of two possible methods. The first is a layered candy, meaning that the candy has several layers, hopefully more than four, that are licked or sucked in turn, so each layer is slowly exposed in sequence. The key here is to have neutral-taste layers, so that the mouth (and we) "forget" the good taste and do not habituate to it. The next layer, after the neutral one, is again delicious, hopefully in another flavor that again triggers our enjoyment.
The second option is to chemically engineer the substrates of the candy, such that its decomposition in the mouth, by the enzymes that degrade it, releases different tastes in sequence. This is obviously much harder, but I believe still a plausible way to go about it.

The point of both is to have a long-lasting enjoyable taste and the good feeling of a tasty candy. I believe my proposition is better equipped for this than simply having a single long-lasting taste.


To summarize, good things should come in pulses: good-neutral-good-neutral. I suggest extending this concept to candy so that tastes will be better appreciated. Bon appétit!

Monday, July 7, 2014

Crafted Umbrellas

From a little survey of how umbrellas look and function, they are pretty much all the same: a round, folding canvas above your head. Below are several suggestions for different designs, technologies and functions of umbrellas.

Asymmetric umbrellas. Why are all umbrellas symmetric? It's probably because of the folding design. However, their function need not be symmetric. If you carry a large backpack with a precious laptop in it, you want much more protection behind you than in front of your face. Also, if you're walking with your umbrella-less friend, you want to be able to extend the umbrella sideways. I propose asymmetric umbrellas: oval-shaped either along the front-back axis, to protect backpacks, or along the sideways axis, for a more-than-one-person umbrella.

Hardened umbrellas. One of the most annoying things is wind, which usually accompanies rain. However, most umbrella designers seem to have forgotten this basic fact: umbrellas always fold the wrong way or completely break in the very first squall. I propose hardened umbrellas. There are materials today that can change their stiffness upon application of an electrical current. Wouldn't it be nice to have a button on the umbrella that hardens it completely and then makes it flexible again when we fold it? Better yet, plugging in a wind sensor could make this shift automatic, maintaining the optimal stiffness for the current wind conditions.

Water-collecting umbrellas. Another annoying aspect of umbrellas is that they protect you from the rain, but then water drips from their edges, usually onto your feet or backpack. Why not have a drainage tube on the rim of the umbrella that either collects the water or channels it into one steady stream, instead of every which way?

These are just a handful of design projects to improve the lives of all the wet people.

May you have a dry day!

Saturday, July 5, 2014

Renovation with Technology

I walk around Cambridge, MA and see very beautiful old houses, and I figure there are many of those scattered across the US and old Europe. While the view is charming and nostalgic, I'm pretty sure the insides of these houses are problematic at best, due to old plumbing, old electricity and general maintenance. Obviously, one can tear down a house and build a new, modern one, but that would break the style of the city and leave a gaping hole in the fabric of nostalgia.

I propose a project that will both renovate a house and keep its style intact. There are technologies today that can easily create a full 3D reconstruction of buildings out of mere photos taken from many sides (e.g. Building Rome in a Day). Furthermore, there are other very cheap range-sensing technologies that can map the 3D structure of a house, e.g. Kinect. I propose to take an old and beautiful house and create a full 3D reconstruction of it. Then, design a fully modern, fully technologically equipped house that is energy efficient and green and carries the most advanced smart-home technologies, with the "minor" constraint that the outside appearance of the house be exactly the same as the old one, peeling paint job, rusty-looking frames and all. Modern paints can have the look of old ones yet function perfectly in any weather. Moreover, by designing the house to be modular in every aspect, i.e. accessible panels for electricity and plumbing, light and strong interior walls, etc., one can build a house that keeps its old appearance but is continually renovated as technology progresses.


This is a purely architectural and design project, but it holds the key to keeping the beautiful landscape of old towns through renovation grounded in technology: living in an old-looking house should not be equivalent to living in an old house. Technology can definitely fix that.

Saturday, June 14, 2014

Wearable bats


A new trend in technology is called "wearables", i.e. embedding technology into our clothes. These can be sensors, such as thermometers, cameras, etc., or other components, such as monitors, screens, buttons, etc.

I propose embedding the sonar sense into clothes. This can be done by incorporating ultrasonic range sensors, which are pretty simple, cheap and light, into the clothes, facing many directions. While they will not compete with Google's autonomous-car apparatus, they can give a general sense of the range to nearby objects. Then also embed light vibration actuators in the cloth, such that it vibrates more strongly when things are nearer.

An optimal apparatus would be a single small-scale unit that senses range on one side and carries a small vibrator on the other. The vibrator can be a shape-memory alloy or a small piezo element that vibrates when current passes through it; the range sensor can be a miniature ultrasonic sensor. Many of these units would then be embedded seamlessly into the clothes, front and back.
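The per-unit behavior can be sketched as a mapping from sensed range to vibration strength: nearer obstacles vibrate harder, and beyond the sensor's reach the unit stays silent. The 4 m maximum range is typical of cheap ultrasonic sensors, but treat it as an assumption.

```python
def vibration_strength(distance_m, max_range_m=4.0):
    """Return vibration duty cycle in [0, 1]: 1 at contact,
    fading linearly to 0 at or beyond the sensor's maximum range."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - (distance_m / max_range_m)


# One reading per unit, in meters; each drives its own actuator.
readings = [0.5, 2.0, 5.0]
strengths = [vibration_strength(d) for d in readings]
```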


While it may seem very uncomfortable to have vibrations on our skin all the time, the human body and mind are highly adaptive and can quickly habituate to the uneasiness while acquiring a new, bat-like sense. Many sensory-substitution devices have shown that people can learn to switch between or acquire new perceptions this way. The novel thing about this project is making it wearable, and thereby easy to don and doff, with a large array of sensors/actuators.

Friday, May 9, 2014

Beeping cars

Cars communicate with people through only a single auditory channel, i.e. the horn. Not only is it a disturbing sound, it also carries many meanings: it can signal danger, "the light has turned", "I know you", "our team has won", etc. This limitation causes not only noise pollution but an actually dangerous situation, where people who signal "hi" with the horn cause other drivers to turn their heads in search of danger.

I propose expanding the auditory channels of cars. Why not have multiple buttons on the steering wheel, each signaling a different meaning: a "honk" for danger, a "beep" for "the light has turned", a "Scooby-Doo" for "hi, I know you"? This could even start a whole genre of car ringtones. It would enrich the auditory environment and alleviate the scary stereotypical noise of a crowded city.


Obviously, there are safety and regulation issues that must be addressed. We don't want people making all kinds of noises all the time. But just as people now refrain from honking constantly out of respect, fear of the law and social pressure, I believe they will not "beep" and "Scooby-Doo" all of the time either.

Sunday, May 4, 2014

Augmented Vision Part I – Seeing beyond time

What happens when you combine Google Street-view, Google Glass and time-lapse photography? For the few who don't know: Google Street-view is a service created by Google that captures images from many, many places in 3D format and lets people virtually travel to other places, as if they were there. Google Glass, on the other hand, enables an augmented-reality-like experience, where images and information are displayed on the Glass in response to either your commands or events detected in the view captured by the camera on the Glass. Time-lapse photography is the art form that captures pictures every X seconds/minutes/hours and then combines them into a single video, allowing us to see things that happen too slowly, e.g. car traffic over an entire day, the formation of clouds, the blooming of flowers.

The concept of Google Street-view is to let people in different places experience a specific scene. The concept of Google Glass is to experience the scene you are in, in different manners. The concept of time-lapse photography is to experience a specific scene from a unique temporal perspective.

I propose the following research project. Select a place with trees, buildings and an open view of the sky and repeatedly record the same place with Google Street-view, over and over again, every minute, in time-lapse fashion. Then create a 3D time-lapse scene of the entire place, showing the movement of the leaves, the weather forming and the buildings' dynamics. Now enable people walking there with Google Glass to view the scene from the time-lapse perspective. They can literally speed up time and see how things around them behave in a completely new and unique fashion. Because it is captured by Street-view, the scene is completely immersive, and because of the Glass, the video is reactive to the user's view-point.


If Google does not want to supply Street-view for this project, it can be done in another, automatic way using only Google Glasses, and the more Glasses the better, following the concept of Building Rome in a Day. That project took tourists' photos of Rome collected from the internet and reconstructed the entire old city in 3D, aligning all the pictures and extracting the 3D information from them. Now think of several dozen people walking by in a specific area, wearing Google Glass and constantly recording the view they see. One can then take all of these videos and reconstruct not only the 3D scene but actually the 4D scene, meaning its dynamics as well. And since people pass specific places at different times, we automatically get time-lapse videos. Then, sending back to those same people the reconstructed videos from all the other people's recorded views will enable them to see the place they're walking in at enhanced speed: seeing beyond time.

Friday, April 25, 2014

AURA - AUtomatic Research Acronymizer

A discovery, project, tool or model is measured by the coolness of its acronym. There are BRAIN and MUSIC, STEM and ICARUS, and the list goes on and on (check this out). But finding the right acronym is not that easy. A short personal story: I once worked on a quantum-mechanical problem called Entanglement Sudden Death, ESD, a macabre name for a really cool phenomenon. However, I showed that entanglement, even after its sudden death, can be brought back to life. So I really wanted a cool acronym for it, and I found ESD-CPR (where CPR usually stands for Cardiopulmonary Resuscitation). I had my acronym, but I had to find what it stood for. After hours of agony I came up with Controlled Partial Resuscitation, which is actually a good description of the phenomenon.

Wouldn't it be nice if there were an automatic tool that, given a short description of the research, gives you a cool, relevant acronym? AURA - AUtomatic Research Acronymizer. The problem is, obviously, one of NLP (Natural Language Processing): the tool must "read" the research description, "analyze" what is written and "come up" with the right acronym. However, related tools are already out there, such as HAHAcronym.
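The core search can be sketched without any NLP: given a list of candidate words (the would-be acronyms) and a bag of keywords extracted from the research description, try to expand each letter of a candidate with a keyword starting with that letter. Candidates that fully expand are returned. The candidate list, keywords and greedy matching are all illustrative simplifications of what a real acronymizer would need.

```python
def expand(acronym, keywords):
    """Greedy expansion of an acronym against a keyword bag;
    returns the expansion word list, or None if impossible."""
    expansion, remaining = [], list(keywords)
    for letter in acronym.upper():
        match = next((w for w in remaining
                      if w.upper().startswith(letter)), None)
        if match is None:
            return None
        expansion.append(match)
        remaining.remove(match)
    return expansion


def acronymize(candidates, keywords):
    """Keep only the candidate acronyms that fully expand."""
    return {c: e for c in candidates
            if (e := expand(c, keywords)) is not None}


keywords = ["AUtomatic", "Research", "Acronymizer", "Useful"]
found = acronymize(["AURA", "CAR"], keywords)
```

A fuller version would rank candidates by how well the expansion matches the description semantically, which is where the HAHAcronym-style machinery would come in.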


I believe that combining tools such as the one cited above (a database of known acronyms and an algorithm for generating acronyms) with semantic networks such as ConceptNet, to accommodate the network of scientific jargon, can facilitate this very important, highly influential and greatly required tool.

A completely folding bike

The trend of folding bicycles is growing, due to urbanization, the high density of people inside cities and the wheel to reduce pollution (yes, it is a pun). There are many types of folding bicycles, but they all have one thing in common: the wheels do not fold. If you look at pictures of folding bikes you'll notice one of two things: either the wheel dominates the size of the folded bike, such that you see a large circle with some bars and hinges attached to it, or the bike's wheels are very small, reduced in size exactly because of this problem.
Wheelchairs, on the other hand, have progressed much further, due to large demand and a subsidized market. In this unique apparatus, folding wheels are now possible, with several competitors out there.

Why not combine the two? Why not enjoy the world of folding bikes as well as folding wheels, to reduce the size and complexity of the bikes even further, to a minimum, while still enjoying the benefits of a large wheel?

And while we're contemplating "revolutionary" ideas on folding and bikes, why not make the folding and unfolding automatic? The idea of inserting some kind of engine into the design to supply the power for the transformation is prohibitive, since the bike would then need a battery, and the complexity and weight would dramatically increase. However, there are other possibilities. One can be taken from the simple folded springy rings that are ubiquitous in IKEA children's toys. Is there a possibility to construct a formidable bike from springy materials that can then carry the weight of a person?

The design of new materials is reaching a turning point where flexibility and hardness can be transformed with the turn of a switch. Maybe using such materials, together with concepts from folding wheelchair wheels, one can create a truly and wholly folding bicycle that can be carried in your purse.

Friday, April 11, 2014

Conductive tattoos


There is a new trend called "wearables", meaning technology that is worn like clothes. Some sensors can now be embedded into clothes to monitor your heart rate and skin conductance, which relates to anxiety. Other wearables are buttons and displays sewn into clothes, and will hopefully make cellphones obsolete, i.e. everything will be in your clothes.

I want to suggest an even more radical wearable, i.e. the conductive tattoo. Recently a new way of designing electronic circuits has been introduced, namely, conductive ink. With this contraption, you can simply draw a circuit and make it work. You can draw whatever circuit you wish, augmented with some electronic devices, e.g. LEDs and batteries, and you simply have a working circuit.

What will happen if you make a conductive tattoo? Suddenly, your body acts as the circuit and you can connect your wearables whichever way you like. You can close a circuit by simply contorting your body, moving your hand, touching your ear, etc. More elaborate circuits require more elaborate tattoos, but hey, after Prison Break, that's probably very popular.

There are two main concerns with this project. The first is the toxicity of the conductive tattoo. While regular tattoos are also not that safe, adding conductive materials to them is probably not that healthy. One should first come up with a bio-compatible conductive ink. The second is the extra electronics you actually need to make the circuit run, e.g. batteries. However, these can be worn on the body, e.g. as wrist bands, with the rest of the tattoo closing the circuit.


I agree that this idea doesn’t make much efficiency-sense, but wouldn't it be cool to see a tattooed contortionist make a body-electrical-display?

Monday, April 7, 2014

Singing as the optimal educational medium - Part III Education

In the last two posts I have claimed that singing is an optimal medium for conveying information, both from an engineering perspective and a neuroscience perspective. I want to tie it all in with education. I believe that due to the points raised before, mainly that singing conveys the optimal combination of language (content) and melody (affect) and that our brain is wired to remember dynamical patterns apparent in singing, one can use singing to educate and teach in an optimal manner.

First I want to make a distinction between what I think can and cannot be taught with singing. Singing is optimal for teaching content-driven material, such as history lessons, as well as higher-level concepts, such as philosophy, law and basic science. However, I believe it is less adequate for material that requires repetition and manual dexterity, e.g. writing and algebra. Whenever there is knowledge to be learned, singing can form a "wrapper" for that knowledge which augments it in several ways: (i) it can imbue affect into the knowledge, which is usually lacking in dry written textbooks. For example, historical events are immersed in emotional content, and singing about them can strike a chord with our emotional neuronal circuits to form a more long-lasting memory of the knowledge itself. This is obviously nothing new, as ballads of fallen heroes and great events were common in the Middle Ages as an effective medium for passing on knowledge. (ii) The melody and dynamical patterns of singing can create a cohesive web of knowledge, so that events or facts are tied together via the melody. The simple a-b-c song is a great example of this.

As written above, these ideas are not new, but I believe they have been forgotten or neglected in recent decades, as the written word has claimed priority in our educational system. I propose a project to re-introduce singing into the educational system in a more structured, scientifically based and optimal manner. This can be done by a comprehensive reformulation of the educational curriculum into the singing medium. In other words, one can take the required body of knowledge and write and compose songs about each and every "fact" in it, in such a way as to utilize the known (or researched) relationships between melody, emotion, memory and content to enhance memory and understanding. History lessons are a good place to start, although basic science is just as adequate.

I must stress two things. I do not suggest that singing supplant the written word, but that it augment it to enhance memory and enthusiasm about the material. Furthermore, several things, e.g. writing and algebra, cannot be taught this way since they must be practiced over and over again. Nevertheless, their principles can, in my opinion, be taught via singing, which will ease the pain of practice. Furthermore, I believe we can all agree that hearing songs is much nicer than reading and memorizing endless texts.

Finally, I want to share the wonder of They Might Be Giants, which I learned to love due to their passion for the topic discussed here. This shows that while optimality and research are important (and lacking) qualities in this specific field, the performing arts are just as crucial.

Friday, March 28, 2014

Singing as the optimal educational medium - Part II Neuroscience

I want to analyze songs and singing from a neuroscience perspective and show that singing has all the qualities to make an optimal educational medium. Songs have several unique cognitive aspects. The first is the sequence-memory aspect, by which I mean that once you know a song, hearing even the first few notes lets you recite it completely. More than that, if you hear a note from the middle of a song, you can continue reciting it with no problem. However, it is almost impossible to recite a song backward in time, i.e. from end to start. To do that, you need to re-sing the song from the beginning each time. The second aspect is a song's catchiness, by which I mean our extraordinary ability to remember songs from the first hearing. This field has been studied, but I'm not certain we fully understand why (more on this below). Songs, in this respect, are very different from poems or any other non-musical verbal medium. It is the rhythm and fluency of a song that make it so memorable, in my opinion. I propose a project to find the exact border between songs and poems, i.e. how much rhythm is required for us to learn and remember them.
The next aspect is the neurobiological one. Memories are created and recalled by patterns of activation of neuronal cell firing. There is a vast network of neurons that form an intricate and complex connectivity pattern, and the sequence of activation determines the memory being recalled. Note that a specific neuron spiking may be enough to cause a recall, or other behavioral outcomes, but the memory itself is stored across a plethora of neurons. I believe that singing has this unique aspect of optimal recall due to its rhythm and fluency, mentioned above. I believe that, as opposed to a story or a poem, the temporal aspect of songs somehow resonates with the neuronal activation patterns. Furthermore, as opposed to pure music, which may share this temporal aspect, we have quite a large chunk of brain associated with language. I believe this unique combination of a verbal and a rhythmic component is what makes songs so memorable. I propose a project to test whether the activation of neuronal cells resonates with their inherent network activation, and to show that in the human auditory cortex and language areas these resonances occur only, or mostly, during singing.

To sum up the neuroscience aspect of singing as the optimal educational medium, I believe that for some bizarre reason, our brain is wired such that songs are optimal in the encoding and retrieving of memory. This grants us the opportunity to use them as an educational tool, if memory is required.

Saturday, March 22, 2014

Singing as the optimal educational medium - Part I Engineering

An audio signal can be represented as what is called a spectrogram, which is a way to show the distribution of audio frequencies over time. Along the x-axis is time, along the y-axis is frequency, and color codes for intensity. Think of it as the equalizer bars you see while playing your favorite mp3, smeared over a page. This representation is very common in speech recognition and other audio-based analysis.
[Figures: spectrograms of music and speech]

Now think of the different types of audio signals that humans use to convey messages. The first is speech. Speech is composed of short utterances called words (dah!). When viewed in a spectrogram, words look like vertical stripes, i.e. short in time but complex in frequency. On the other hand, music without words is the complete opposite: long in time but narrow in frequency, since it represents something closer to "pure tones". Singing, i.e. speech with melody, is just in the middle.
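To make the stripes-vs-lines intuition concrete, here is a small sketch (assuming NumPy and SciPy are available; the signals are synthetic stand-ins, not real recordings) contrasting a pure tone with a short broadband burst:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000  # sample rate (Hz)
t = np.arange(fs) / fs  # one second of time stamps

tone = np.sin(2 * np.pi * 440 * t)       # "music"-like: long in time, narrow in frequency
burst = np.zeros(fs)
burst[4000:4080] = np.random.randn(80)   # "speech"-like: short in time, broad in frequency

def spread(x):
    """Fraction of spectral energy outside the dominant frequency bin."""
    f, _, Sxx = spectrogram(x, fs)
    power = Sxx.sum(axis=1)              # total power per frequency bin
    return 1 - power.max() / power.sum()

print(spread(tone))   # small: energy concentrated near 440 Hz (a horizontal line)
print(spread(burst))  # large: energy smeared across frequencies (a vertical stripe)
```

Singing should fall between the two: word-like frequency complexity riding on tone-like temporal continuity.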

What does all of this have to do with "optimal" and "education"? In signal processing there is a notion called a "compact set", which is the minimal set of features you need in order to convey information. In other words, with these features you can code the most information and then send it to other people. It is "compact" because just a few features can generate a lot of information. It is "optimal" in the sense that no other set of the same number of features can convey more information; any change to the features will result in loss of information. What are you talking about???

The project I'm suggesting is to analyze singing in the context of a compact set of auditory human information. If information is coded via the spectrogram, i.e. the information content is frequency over time, then I believe that singing, which contains both long sequences and high frequency content, can serve as the optimal compact set to convey that information. I think that the reason is that complex frequency conveys meaning, e.g. words, whereas the long temporal domain conveys sentiment, e.g. emotion. In human communication both are important and I believe it can be quantified.

One such suggested experiment is to have a questionnaire on the information conveyed via several types of communications, for example: "what did this person think?" "what did this person feel?" "what did this person try to convey to you?" The three conditions will be: (i) speaking; (ii) musical instrument and (iii) singing. Each condition will be of equal duration. My hypothesis is that the best answers will be with singing.

This type of experiment suggests that the optimal way to communicate is not talking, but singing. Can this be used in education? Wait for the next blogpost…

Saturday, March 15, 2014

Creating mythical creatures

Today's genetic manipulation and understanding of embryonic development are quite astounding. You can actually write genetic code on your computer, send it over the web and receive at your home/lab a vial with the DNA you've written. On the other end, our understanding of the biological genetic code is increasing. Scientists know the exact codes of several thousand proteins, and most of their functions.

A more difficult field is embryonic development, in which the genetic code unfolds into creating the beings that are then born. It is a delicate balance between the genetic code, transcription factors that regulate the production of proteins, the differentiation of cells, and local and global chemical gradients. Nevertheless, a lot is known about the sequence of organ development and how to influence it.

The last piece of the puzzle is genetic manipulation and the creation of chimeras, in which a gene sequence from one species is inserted into another. You probably all know of the glowing mouse, where genes from glowing bacteria have been inserted into the mouse genome and it glows in the dark.

Putting it all together, I think it is time to start bringing mythical creatures to life. While dragons are still out of the question, minor variations can be made. The animal that is easiest to create, to my understanding, is the unicorn. The only unique thing about it is the single horn on its forehead. It has been shown that grafting an organ is possible, but I'm talking about a genetic/embryonic manipulation, such that a unicorn will be born. I believe the horn should be made similar to the rhinoceros horn, since it can be made pointy and doesn't require a bony structure. Hence, the genetic manipulation should not be major. Furthermore, the structure of such a horn is not dissimilar to that of a horse's hooves, so there is no need to insert new proteins into the pool, only to regulate their expression. I admit, the way there is not easy in design, implementation and regulation, but… a unicorn.

Other mythical creatures are probably harder, although I'll bet that Cerberus should not be that hard: it's simply Siamese triplets with a single body. Directly controlling this, so as to make it reproducible, is probably not easy, but if Harry Potter has one, why shouldn't we? Pegasus is another problem altogether: while I believe it would not be too difficult to have a horse with wings, a flying horse is, to the best of my knowledge, against the laws of physics. The same is true of a fire-breathing dragon, but that is for another post…


Why do this? Apart from scientific curiosity, engineering challenge and pure geekiness, the market for such creatures could be huge. Which zoo would pass up the opportunity to show its unicorn? Think of the extra marketing a circus could have with a Cerberus at its gate. The opportunities are endless. The only thing missing is a curious scientist/engineer/geek with enough money to start this crazy project. Good luck!

Monday, March 10, 2014

Laser Tombstone Preserver

If you walk in old graveyards, the thing you notice most is that the words on the tombstones are illegible. The older the tombstone, the fainter they are. I think this is a shame, since those names and inscriptions are history, and while they are "engraved in stone", even stone deteriorates. Other old buildings and monuments share the same fate, and this should be rectified. Furthermore, even new tombstones will someday fade from memory and recognition.
I propose a device that will maintain the etched inscriptions on stone monuments, such as tombstones. It is composed of three components, namely, a scanner, a recognizer and an etcher. The scanner uses modern 3D laser scanning techniques to detect the current inscription on the tombstone, generating a complete digital 3D scan. This is then passed to the recognizer, which uses state-of-the-art deciphering tools to reconstruct the inscription. One can use more sophisticated sources of information, such as cross-referencing the GPS position of the tombstone with historical records of people from that area, to have a better chance of recovering the correct text. Finally, the etcher, which can be either a powerful laser or any stone-etching device, deepens the faint inscriptions so that they are more readable and more apparent. To sum up, the device is a portable one, which is passed over the tombstone and re-etches the inscription.
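A minimal sketch of the recognizer stage, with the scanner and etcher stubbed out; the data structures and the letter-matching rule are hypothetical simplifications of what a real deciphering tool would do:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Scan:
    gps: tuple            # (lat, lon) of the tombstone
    partial_text: str     # what the 3D scanner could still read, '?' for worn letters

def matches(partial, candidate):
    """A worn inscription matches a record if every legible letter agrees."""
    return len(partial) == len(candidate) and all(
        p == "?" or p == c for p, c in zip(partial, candidate))

def recognize(scan, records, radius=0.01):
    """Pick the historical record near the scan's GPS that fits the legible letters."""
    nearby = [r for r in records
              if hypot(r["gps"][0] - scan.gps[0], r["gps"][1] - scan.gps[1]) < radius]
    fits = [r["name"] for r in nearby if matches(scan.partial_text, r["name"])]
    return fits[0] if fits else None

# the etcher stage would then re-engrave the reconstructed text
scan = Scan(gps=(51.53, -0.14), partial_text="J?HN SM?TH")
records = [{"name": "JOHN SMITH", "gps": (51.53, -0.14)},
           {"name": "JANE SMITH", "gps": (48.85, 2.35)}]
print(recognize(scan, records))  # JOHN SMITH
```

A real recognizer would match the full 3D heightmap against candidate glyphs rather than discrete letters, but the cross-referencing logic would look much like this.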
This device, once available, creates a new job, namely, Tombstone Preserver: a person who goes around the country, from one graveyard to the next, using the device to restore inscriptions. Furthermore, the deciphered scripts can then be uploaded to a cloud-based database for research and historical records.

I believe this project can revive a lot of lost history and has great personal value to many people around the world. The technology is available; all that is needed is a person to make it. Interested?

Saturday, March 1, 2014

Gestures in Chat rooms

Non-verbal communication, such as facial expressions and hand gestures, has a drastic effect on the understanding and social impact of a conversation. There are numerous research projects showing this from psychological and sociological perspectives. However, the current common form of communication is text, e.g. SMS, chat rooms, WhatsApp, etc. These lack almost all elements of face-to-face communication, such as tone of voice, gestures and facial expressions. While people use texting, to some extent, precisely to reduce the complexity of communication, the medium could be enhanced if those cues were present. And while video chat solves most of these problems, texting will not go away even once video streaming becomes more accessible; there is the allure of not actually being seen on the other side.

I suggest a research project to investigate the incorporation of non-verbal communication into text-based media. Emoticons were obviously the first step, as they nicely replace facial expressions: a :) substitutes a smile on the face, while other, more complex emoticons replace other expressions. Two axes of extension are suggested, namely, including gestures and automating their inclusion.

How to include gestures? I believe a new form of emoticon can be introduced. It has been shown that gestures actually relate to physical reality: gesturing the word "all" encompasses a large space, while gesturing "never mind" performs a discarding motion. I propose creating hand-based animations for text-based media, very similar to complex emoticons. But now, instead of a face substituting a facial expression, there will be hands substituting gestures. One can create many such gesturecons, to cover all kinds of meaning. See http://en.wikipedia.org/wiki/List_of_gestures for more examples. The research question is the applicability and usage of these gesturecons by chatters: will they use them? How much? In which situations? What are the favorite gesturecons?

The next extension to emoticons is their automatic inclusion. Nowadays, facial recognition hardware and software are readily available, e.g. Kinect. There are known algorithms to track the face and recognize facial expressions such as a smile, a laugh and other expressions. I suggest integrating this automatic recognition into text-based media, such as Facebook, WhatsApp, etc. In other words, when someone sends you a funny picture and you actually laugh, the system will automatically detect it and send an LOL. Research questions: will people like it, or do they prefer to control their emoticons? Do people send more "fake" emoticons than real ones?
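A toy sketch of this automatic-inclusion layer, assuming some detector (Kinect or otherwise) already emits expression labels; the mapping table and the de-duplication rule are my own assumptions:

```python
# map detector output (expression labels) to chat-ready emoticons/gesturecons
EMOTICONS = {
    "smile": ":)",
    "laugh": "LOL",
    "frown": ":(",
    "shrug": r"¯\_(ツ)_/¯",   # a "gesturecon"
}

def auto_insert(detected_expressions):
    """Convert a stream of detected expressions into emoticons, skipping
    unknown labels and collapsing immediate repeats to avoid spamming."""
    out = []
    for expr in detected_expressions:
        icon = EMOTICONS.get(expr)
        if icon and (not out or out[-1] != icon):
            out.append(icon)
    return out

print(auto_insert(["smile", "smile", "laugh", "yawn", "shrug"]))
```

The interesting research sits upstream (the detector) and downstream (do users accept machine-sent emoticons?); the glue itself is this simple.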

Gestures are also readily detected. Using Kinect or similar devices, there are already algorithms out there to detect gestures. However, there is a crux: when you're in a text-based medium, your hands are occupied typing and you can't really gesture anything. There are two approaches to this problem. The first is that once dictation becomes prevalent, so that you speak your text, you can gesture your gesturecons at the same time. The second is creating a whole new field of "typing gestures", e.g. when you lift your hands in exasperation, a gesturecon will appear; when you crack your knuckles, the appropriate gesturecon will be inserted, etc.

Obviously, there is much more to be done in this project, but that's the fun of it, isn't it?

Saturday, February 22, 2014

Guided Dreams

Recent sleep research has advanced our understanding of dreaming and processing during sleep (most of the data here were heard during an inspiring talk by Bob Stickgold). For example, it has been shown that during sleep we not only remember things from the past day, but also continue processing them in many ways. Motor skills learned during the day are further processed during sleep, actually enhancing performance after a good night's sleep. The same has been shown for visual and other sensory memory tasks.
More importantly, it has been shown that insights also occur during dreams. While there have long been anecdotes of famous scientists making discoveries while dreaming, methodological scientific research has shown that insights are gained during sleep. A problem that can be solved by a "shortcut" was more frequently solved by that shortcut after a good night's sleep (and not just after time passing). Somehow, the brain continues to process the information in sophisticated ways, finding different representations and encodings of the data that not only improve memory but also surface new insights.
The most striking thing, in my opinion, is the research concerning guided dreaming. It has been shown experimentally that items linked via an extraneous sensory cue, e.g. a scent or sound presented at the same time, were remembered better than items that were not, but only when that sensory cue was replayed at the appropriate time during sleep. For example, visual stimuli that were paired with a scent were remembered better after a night with the same scent.
I propose a practical tool for guided dreaming. While olfaction is not really practical yet, sound can be used as follows (substitute "odor" for "sound" once it is). The tool is a music player connected to your schedule. Throughout the day, for each of your specific activities, play a distinct kind of music. For example, jazz for work, vocals for driving, hip-hop for playing with the kids and techno with the wife. Then, before you go to sleep, decide how you want to spend your night: consolidate the time with the wife => play (soft) techno; better remember playing with the kids => play hip-hop; solve a problem at work => play jazz. As for the latter, you can obviously refine the method: if you had a busy day at work but want to focus on one specific problem, play jazz only while you're trying to solve that specific problem. Then, replaying jazz during sleep will enhance the chance of finding a solution.
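The scheduling logic itself is trivial; here is a minimal sketch (the activities, genres and "soft" default are just the examples above, not a validated protocol):

```python
# hypothetical day schedule: each activity was tagged with a distinct genre
DAY_GENRES = {
    "work": "jazz",
    "driving": "vocals",
    "kids": "hip-hop",
    "wife": "techno",
}

def tonight_playlist(goal, volume="soft"):
    """Pick the music to replay during sleep for a chosen consolidation goal."""
    genre = DAY_GENRES.get(goal)
    if genre is None:
        raise ValueError(f"no genre was tagged to '{goal}' today")
    return {"genre": genre, "volume": volume}  # soft, so it doesn't wake you

print(tonight_playlist("work"))  # {'genre': 'jazz', 'volume': 'soft'}
```

The hard part is not the code but the discipline of keeping each genre exclusive to one activity, so the cue stays unambiguous for the sleeping brain.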
I must caution that this method is based on an experiment, but is by no means sound-proof. It does not mean that suddenly your memory will be better, or that you'll only dream of what you want. However, based on these preliminary studies, there is a chance that your brain will work more on the specific issue that you guide it to.

If you're going to experiment on yourself, feel free to share your experience in this blog. I'm sure other people (mostly me) would love to hear about it. Good night.

Monday, February 10, 2014

Adorable Screens

We spend most of our waking hours staring at screens, either computer screens, for those with a desk job, or smartphones, for ... everyone. While the contents inside the screen have become much more likeable and nice to look at, mostly thanks to Steve Jobs, RIP, the screens themselves haven't changed since their inception. They are still ugly rectangles, usually in horrible colors.

I suggest a novel product design project, wherein the shape, color and whole aesthetic of the screen itself be rethought. While I understand that the rectangular shape is mostly due to hardware and digital-screen configuration, I'm pretty sure that today's technology can produce other shapes.

Some examples of other possibilities, as opposed to the black rectangle I'm staring at as I'm writing these words:
-        An oval screen, with some wiggles on top for hair and small indentations on the sides for ears, to mimic a cute face. This way, maybe I'll interact with my screen as with another person, not a device. It can add some illusion of sociability to the isolated screen-based life some people have.
-        A shape-changing screen, mimicking waves or leaves in the breeze. While this seems like it would distract the user, you'd be amazed at how quickly we habituate to continuous stimuli. Add some auditory context of rustling leaves, and I bet anxiety levels when Windows prompts yet another error message will drop drastically.
-        A color-changing screen, by which I don't mean the content displayed IN the screen, but the surrounding frame, which most computer screens still have. Think how much nicer it would be if the color either matched your mood or, preferably, steered your mood toward the desired one, e.g. light blue for calmer interaction, or red for more productive times.

There are probably dozens more options like these, where designers can go wild and keep in mind that the goal is to make screen-lookers happier.

Thursday, January 30, 2014

Super Personalized Search

Search engines, e.g. Google, have transformed into personalized search engines, i.e. they learn your search habits and can nicely predict what you are going to search for, not only what you are searching for now. These algorithms usually depend on your previous search history and other clever signals that can be learned from other people's searches.

I want to suggest expanding this to new dimensions (probably Google already does this, but I don't know of it :)). The first, rather trivial, dimension is time. There are usually temporal dynamics to searches, i.e. if you search X there is a larger probability that you'll search Y, but only within a narrow time window, e.g. one day. These statistics can be learned from other users. I've heard of research on search patterns for medications, using the time-lapse to searches for side-effects as a means of detecting side-effects via searches. In other words, if I search for drug X and a month later I search for "headaches", and enough people repeat this pattern, perhaps X's side effect is a headache after a month. To conclude this dimension, the search should also take into consideration delays between searches and narrow time windows of X-Y search correlations.
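A sketch of the X-then-Y-within-a-window statistic described above (the log format and the toy numbers are assumptions for illustration):

```python
from collections import defaultdict

def lagged_correlation(log, x, y, min_days, max_days):
    """Fraction of users who searched x and then searched y within the
    given delay window. log: list of (user, term, day) tuples."""
    searches = defaultdict(list)
    for user, term, day in log:
        searches[user].append((term, day))
    x_users, xy_users = 0, 0
    for events in searches.values():
        x_days = [d for t, d in events if t == x]
        if not x_days:
            continue
        x_users += 1
        y_days = [d for t, d in events if t == y]
        if any(min_days <= (dy - dx) <= max_days
               for dx in x_days for dy in y_days):
            xy_users += 1
    return xy_users / x_users if x_users else 0.0

log = [("u1", "drugX", 0), ("u1", "headache", 30),
       ("u2", "drugX", 5), ("u2", "headache", 36),
       ("u3", "drugX", 2),                      # no follow-up search
       ("u4", "headache", 1)]                   # never searched drugX
print(lagged_correlation(log, "drugX", "headache", 25, 35))
# 2 of 3 drugX searchers followed up in the window -> ~0.67
```

At web scale one would compare this fraction against a baseline rate of "headache" searches to decide whether the lag is a real signal.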

Another cute dimension along which to super-personalize search is outside events, e.g. weather. The search can correlate your own personal attitudes toward weather phenomena, e.g. snow, heat-waves, etc. Thus, taking the weather at your location from other sites, the search can be refined when you search for "coffee shops": if it's snowing it will direct you to a more hot-coffee-oriented shop, whereas if it's hot, it will direct you to iced-coffee shops.

Other local and/or global events can also shape your personalized searches, as different people react differently to events, e.g. elections, holidays, etc. Correlating your search patterns on similar events and cross-referencing them with people within your search-cluster can further refine the search results when an event occurs.
Finally, the search can be refined to your own current state of affairs. By accessing your own social media, e.g. Facebook, Twitter, it can refine the search to your own and your friends' states. For example, if you just posted "This is not my day", your search can be refined toward happier sites so as to lighten up your day. Another example: if a friend of yours posted that she is seeking something, your own searches can be refined so as to hint at a solution to your friend's problem, even though you searched for something else. This can augment both your social interaction media and your search results.

Thinking along these lines, one can probably come up with other ways to super-personalize search. I encourage you to think of them, implement them, open a start-up, be bought by Google and become a millionaire. A small (non-committing) comment on the blog will be appreciated.

Thursday, January 23, 2014

Biological Wind/Watermills

The source of all biological energy is the sun, through the process of photosynthesis. Ultimately, life on earth is solar. However, as we pursue more "sustainable energy", we also work with other types of energy, namely, wind. The sight of those huge wind turbines is awe-inspiring, and somewhat disturbing. Can we also build molecular wind turbines?

Recently, it has been done. Nanostructures in the form of wind turbines have been constructed. However, as opposed to the biological world, these wind turbines produce electricity. This is unhelpful for living organisms (other than ourselves).

Another puzzling question arises: life has the unique capacity to exploit the resources at its disposal. It has recently been discovered that there are bacteria that can utilize gamma radiation as a source of food. Why not wind and/or water currents?

The answer cannot be that it is mechanical, since the opposite direction is abundant: biological energy, in the form of ATP molecules, moves flagella, which are kinds of whirly hairs that drive cells, e.g. sperm cells, to move. So the conversion from ATP to movement is ubiquitous. Why not the opposite?

A biological energy-producing apparatus, like the one in mitochondria, utilizes a pump that converts an H+ gradient across a membrane into ATP (using the ATP-synthase protein). In other words, a source of energy in the mitochondrion creates a concentration gradient, which is then converted into ATP, the energy currency of biology.
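A back-of-envelope check that such a gradient carries real energy (the membrane potential and pH difference are textbook ballpark values, not measurements): the proton-motive force comes out around 0.18 V, i.e. roughly 17 kJ per mole of protons, so about three protons per ATP.

```python
# rough energy bookkeeping for the proton gradient -> ATP step
R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # body temperature, K
F = 96485.0     # Faraday constant, C/mol

delta_psi = 0.15   # electrical potential across the inner membrane, V (ballpark)
delta_pH = 0.5     # matrix more alkaline than the intermembrane space (ballpark)

# proton-motive force (V): electrical term + chemical (pH) term
pmf = delta_psi + 2.303 * R * T / F * delta_pH

energy_per_proton = F * pmf / 1000.0   # kJ per mole of H+ crossing the membrane

atp_cost = 50.0                        # ~kJ/mol to make ATP under cellular conditions
protons_per_atp = atp_cost / energy_per_proton

print(round(pmf, 3))             # ~0.181 V
print(round(energy_per_proton))  # ~17 kJ/mol
print(round(protons_per_atp, 1)) # ~2.9, consistent with the measured 3-4
```

So a watermill only needs to pump a few protons per ATP; the question is whether a nano-rotor in a realistic current can deliver that much work.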

I propose to create a novel energy-producing nano-structure, namely, the biological watermill. The structure should have the shape of a water mill that transforms water current (water, since most of life happens in aqueous environments, though wind can also be used for floating bacteria or plants) into a concentration gradient and then, via ATP-synthase, into ATP. This structure would thus create a completely novel and unique energy source for biology.

Furthermore, with the new DNA-folding techniques (see previous post), one can probably create the structure from biological materials only, e.g. DNA, or try, with novel computer-aided protein-folding software, to design the structure as a protein. This presents a unique, and somewhat troubling, opportunity to introduce a DNA-coded windmill into biology. Inserting it into bacteria and/or plants could have drastic ramifications. Perhaps it could serve as a novel energy source in starving countries.

To conclude, I suggest inventing a current-driven (wind or water) biological structure that produces ATP. It opens a completely new type of energy for living organisms.