Two months ago at the Watson Developer Conference, IBM released TJBot, a DIY robot that can be assembled from cardboard, as shown above. Don't be fooled by its cute, humble appearance: at its core it is a programmable AI built on Watson. Even before the cardboard version was released, developers around the world, including in South Africa, Kenya, Italy, Germany, Switzerland, Pakistan, Canada and Hong Kong, had created laser-cut and 3D-printed versions of TJBot. Lei Feng Network has learned that a number of developer groups have expressed interest in working with IBM to design new scenarios for TJBot: from robotics and cognitive-computing courses for teaching, to elderly care, to enterprise-grade conversational-agent solutions.
IBM published instructions for building and programming TJBot, known as "recipes", on Instructables, where the project has been well received by the online developer community: it was featured on the community's main pages and has received 21,000 user comments. TJBot has been embraced by developers at all levels, from beginners to experts, who use it to learn, experiment, and interact naturally with a cognitive object.
A simplified design, built for developers
The TJBot project was conceived as an entry point for users to experience and experiment with "embodied cognition": AI technology embedded in the devices, objects, and spaces we interact with. If that process is made simple enough, what will users create? What kinds of design ideas will emerge? TJBot helps find answers to these questions. As a DIY kit everyone can afford, it democratizes innovation in embodied cognition.
Sensors plus Watson enable a variety of capabilities
As a prototype, TJBot's skill set keeps expanding: according to Lei Feng Network, it can already speak, perform speech recognition, wave, and dance. These capabilities are realized by its built-in sensors together with a range of cognitive services. For example, TJBot's speech is produced in software by the Watson Text to Speech service and then played through its speaker. Similarly, the Watson Speech to Text service combined with TJBot's microphone enables speech recognition. These skills can be composed to create new scenarios, such as a virtual agent or digital assistant.
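The composition described above, a sensor or actuator paired with a cognitive service, can be sketched in a few lines. This is a minimal illustration, not the real Watson or TJBot APIs: the service and hardware functions below are hypothetical stand-ins.

```python
# Sketch of how TJBot-style skills pair a hardware channel with a cognitive
# service. All functions here are illustrative stubs, not real Watson calls.

def text_to_speech(text):
    # Stand-in for Watson Text to Speech: would return synthesized audio.
    return f"<audio:{text}>"

def play_on_speaker(audio):
    # Stand-in for the speaker hardware: would play the audio buffer.
    return f"played {audio}"

def speech_to_text(audio):
    # Stand-in for Watson Speech to Text: would transcribe captured audio.
    return audio.removeprefix("<audio:").removesuffix(">")

def speak(text):
    """Skill = cognitive service (TTS) + actuator (speaker)."""
    return play_on_speaker(text_to_speech(text))

def listen(audio_from_microphone):
    """Skill = sensor (microphone) + cognitive service (STT)."""
    return speech_to_text(audio_from_microphone)

def voice_agent(audio_from_microphone, respond):
    """Composing two skills yields a simple voice agent: hear, then answer."""
    command = listen(audio_from_microphone)
    return speak(respond(command))

if __name__ == "__main__":
    reply = voice_agent("<audio:what time is it>",
                        lambda cmd: f"you said: {cmd}")
    print(reply)  # played <audio:you said: what time is it>
```

Swapping the stubs for real service clients and hardware drivers is all that separates this toy from a working digital assistant, which is exactly why the skills compose so readily.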
New recipes added
Currently, TJBot's GitHub toolkit contains three basic recipes: making TJBot respond to simple voice commands, analyzing tweets and reacting to them, and having TJBot act as a conversational agent. Community developers have contributed two more recipes: TJWave and Swifty TJ. TJWave shows developers how to control TJBot's arm, and includes an extra trick: making TJBot "dance" to music. TJBot plays a music file, extracts the beats and peaks, and waves in time. TJBot's short arm can also make voice interaction more vivid, mimicking the hand gestures people make while speaking. Swifty TJ shows how to control TJBot's LED with the Swift language; as TJBot's recipe directory expands, it offers a starting point for Swift developers writing code for TJBot.
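The "extract the beats and peaks, then wave" step of TJWave can be sketched as a simple peak detector over an amplitude envelope. The envelope values and the `wave()` callback below are illustrative assumptions; a real recipe would decode actual audio and drive TJBot's servo.

```python
# Sketch of the TJWave dance idea: find beat peaks in a music file's amplitude
# envelope and trigger one arm wave per peak. Data and wave() are stand-ins.

def find_peaks(envelope, threshold=0.6):
    """Return indices that are local maxima above the loudness threshold."""
    peaks = []
    for i in range(1, len(envelope) - 1):
        if (envelope[i] > threshold
                and envelope[i] >= envelope[i - 1]
                and envelope[i] > envelope[i + 1]):
            peaks.append(i)
    return peaks

def dance(envelope, wave):
    """Call wave() once per detected beat; return the peak positions."""
    peaks = find_peaks(envelope)
    for _ in peaks:
        wave()
    return peaks

if __name__ == "__main__":
    # Fake amplitude envelope sampled over time, with two clear beats.
    env = [0.1, 0.2, 0.9, 0.3, 0.2, 0.8, 0.1, 0.05]
    waves = []
    print(dance(env, lambda: waves.append("wave")))  # [2, 5]
```

The design point is the separation: beat detection is pure signal processing, while the wave itself is just a callback, so the same detector could drive the arm, the LED, or both.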
Next steps
In 2017, IBM will focus on three areas for TJBot: development, maintenance, and learning.
Development: IBM will continue to improve the existing recipes while exploring new skills for the cardboard robot. For example, IBM has just used TJBot's camera sensor, together with accompanying software, to give it visual recognition capabilities.
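The new camera skill follows the same sensor-plus-service pattern as TJBot's other abilities. The sketch below is a hypothetical illustration: `capture_image` and `classify` are stand-ins, not the real camera driver or Watson Visual Recognition API.

```python
# Sketch of a camera-based skill: pair the camera sensor with a
# visual-recognition service. Both functions are illustrative stubs.

def capture_image(camera_frames):
    # Stand-in for the camera: grab the most recent frame.
    return camera_frames[-1]

def classify(image):
    # Stand-in for a visual-recognition service: map an image to labels.
    fake_model = {"frame-cat": ["cat", "animal"], "frame-mug": ["mug", "cup"]}
    return fake_model.get(image, ["unknown"])

def see(camera_frames):
    """Skill = sensor (camera) + cognitive service (visual recognition)."""
    return classify(capture_image(camera_frames))

if __name__ == "__main__":
    print(see(["frame-mug", "frame-cat"]))  # ['cat', 'animal']
```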
Maintenance: develop and sustain the TJBot developer community by introducing TJBot to new users, sharing new recipes, and gathering user challenges and feedback.
Learning: perhaps the most important of the three. This includes studying both the developer experience and the end-user experience, in order to provide better design ideas and guidance for cognitive applications.