Gary @Zamchick, co-founder of WordsEye, visited this morning. I had seen an early prototype of WordsEye a few years ago, and his current version is amazing. Here’s a video of Gary giving a demo to one of my awesome colleagues, kindergarten teacher Joyce Tsang…
As per the email WordsEye sent me upon registering:
WordsEye lets anyone “type a picture” using simple language. It uses natural language technology to translate your sentences into 3D scenes. Words can become art, visual opinion, greetings, and more.
Below is an example of text and the resulting scene included in the same registration email:
@WordsEye is a remarkable twofold web-based application. You can “type a picture” using simple, descriptive language to create an elaborate 3D scene. There’s also a social network component: you can share your creation to the WordsEye gallery, and download or remix someone else’s scene. When you explore the WordsEye Gallery, you can also click an image to see the exact text used to create a particular 3D scene. I loved this aspect; it reminded me of how you can “see inside” Scratch programs shared online to learn from the original creator and remix the project to make it your own.
As a literacy tool, WordsEye is wonderful for reinforcing the importance of descriptive and figurative language. You can change a scene easily by introducing or replacing words. I imagine having students build a lexicon of language that works in WordsEye, so they can help each other determine how words like tiny, humongous, large, small, and huge will change the look and size of an object. In that respect, there are opportunities to have conversations about scale and proportion as well. Besides space and distance, WordsEye also recognizes pronouns: you can type “The dog is two feet from the sofa. It is to the left of the planet.” and WordsEye will place the objects accordingly.
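To get a feel for what “recognizing pronouns” involves, here is a tiny sketch of the general idea (this is my own toy illustration, not WordsEye’s actual technology): each sentence’s subject is remembered, and a later “It” is replaced with the most recently mentioned subject before the scene is laid out. The function name `resolve_pronouns` and the whole approach are assumptions for demonstration only.

```python
import re

def resolve_pronouns(sentences):
    """Toy antecedent resolution: replace a leading 'It' with the
    most recently mentioned subject (e.g. 'The dog')."""
    resolved = []
    last_subject = None
    for s in sentences:
        # Match either a 'The <noun>' subject or the pronoun 'It'.
        m = re.match(r"(?i)(the \w+|it)\b(.*)", s.strip())
        if not m:
            resolved.append(s)
            continue
        subject, rest = m.group(1), m.group(2)
        if subject.lower() == "it" and last_subject:
            subject = last_subject  # substitute the antecedent
        else:
            last_subject = subject  # remember this subject for later
        resolved.append(subject + rest)
    return resolved

print(resolve_pronouns([
    "The dog is two feet from the sofa.",
    "It is to the left of the planet.",
]))
# → ['The dog is two feet from the sofa.', 'The dog is to the left of the planet.']
```

Real natural language systems do far more than this (tracking gender, number, and plausibility), but even this toy version shows why a classroom conversation about what “It” refers to maps so directly onto how the software reads a sentence.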
I hope one day WordsEye will be voice-activated, so that younger students can dictate words rather than type them. Also, I wonder if more emotions could be coded into WordsEye so that you can type “the sad boy” or “the happy alien” or “the frustrated teacher” (haha). Consider a doctor’s non-verbal chart of smiley faces to help illustrate a patient’s pain — maybe something similar will enable users to include layers of emotion or other non-verbals that can enhance the finished scene or offer insight into something they are not comfortable voicing aloud yet are ready to share in a visual medium.