South by Southwest SXSW 2016 celebrated its 30th anniversary from March 11 to March 20. The festival (film, music, interactive) took over the centre of Austin, a city whose motto is “Keep Austin Weird”. Strate was there to be inspired and to support the beautiful Living Joconde of Florent Aziosmanoff at the TradeShow event. Here is a day-by-day selection of our ♥ ❤ ❥ and interests. Among the main themes were AI and robotics, and virtual reality was everywhere.
Day 1 (March 11): Big Data and AI, Failure:Lab, Humanising Tech Via the Power of Voice
The Big Data and AI: Sci-Fi vs Everyday Applications panel discussed several issues around advances in AI and big data. Common sense, the missing element in AI, comes down to trust: we can trust a Roomba to vacuum, but are we going to trust it to be a nanny? One needs image-recognition algorithms, the other needs to be reliably safe. Regarding speed, data and deep learning (IBM Watson; for more, see day 3): we need not only to see the result but also to know the “why” behind a decision (common sense for humans). We need predictions that are accurate but can also be explained, systems that are partners – Tell & Explain. On confirmation bias: humans believe in stories; AI doesn’t have this bias. “We believe that our decisions are great”, but “cold decisions” are important in certain situations.
Discovery of the concept of Failure:Lab — “an international movement that showcases the struggles behind success. The events feature stories from successful people, providing the context and backstory behind their rise” — and lots of talks on failing, vulnerability and fears at SXSW.
The HAL to Her: Humanizing Tech Via the Power of Voice panel went from speech recognition to personal assistants. The products today: Amazon Echo, Cortana, Siri, OK Google (wow, all female voices!), the XFINITY voice remote control for TV, and Onyx, a wearable walkie-talkie for your phone. In the future there will be no need to open apps, just talk. “People don’t want to read the manual” (Google); the research is “not only on what you say but also on how you say it” (Microsoft). There was a discussion on Google’s AlphaGo AI: it is the first type of AI, where the goals are clear, but we need a second type of AI in which we create the rules and the goals, and might also change them along the way. This is the path between voice recognition and conversational agents.
Day 2 (March 12): Five Counterintuitive Truths About Habits, One Robot Doesn’t Fit All, Emoji.
Five Counterintuitive Truths About Habits by Gretchen Rubin, a writer on happiness. A very clear talk on habits and strategies to discover the habits that work for each person. She talks about abstainers and moderators, healthy and unhealthy treats (give yourself a little something and stay in control), habit vs. finish line (if you do something only to hit a goal, it will not become a habit; habits work with milestones), the importance of inner calm that comes from a little outer order, and finally a framework of four personality types that can help us determine which habit strategies work for us: upholders, questioners, obligers, rebels. Well, you get what you get, and you don’t get upset. To find out more, G. Rubin has lots of podcasts: here.
One Robot Doesn’t Fit All with Wendy Ju and colleagues was one of the best panels, with an interesting discussion on favourite robots (M-O from WALL-E, the little robot that likes to clean everything; Amazon Echo – even if it is not a robot, it shows that social robots are here and it does our work for us; Guy Hoffman’s robots with soul – they are really incredible!!!). So back to basics: what is a robot? It has to move (so Echo is not a robot?), and it has to sense, plan and act. A super difficult question: how do we make robots that people accept? Express a goal and cope with failure: if, as a robot, you can’t do a job, at least show that you care. Last but not least, the social aspects of robotics (an adolescent trash-barrel robot) in a super interesting experiment here.
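The sense-plan-act definition the panel used is the classic robotics control loop. As a minimal sketch (the toy 1-D world and all names are my own illustration, not anything shown at the panel), a tiny trash-collecting robot could look like this:

```python
# Minimal sketch of the sense-plan-act loop in a toy 1-D world.
# The "world" is a set of integer positions holding trash.

def sense(world, position):
    """Sense: report the nearest piece of trash, if any."""
    targets = [t for t in world if t != position]
    return min(targets, key=lambda t: abs(t - position)) if targets else None

def plan(position, target):
    """Plan: decide a one-step move toward the target."""
    if target is None:
        return 0
    return 1 if target > position else -1

def act(position, step):
    """Act: execute the planned move."""
    return position + step

def run(world, position, max_steps=20):
    world = set(world)
    for _ in range(max_steps):
        target = sense(world, position)
        if target is None:
            break
        position = act(position, plan(position, target))
        world.discard(position)  # trash is collected on arrival
    return position, world

final_pos, remaining = run(world={2, 5, -1}, position=0)
print(final_pos, remaining)  # → 5 set()
```

The robot repeatedly senses the nearest target, plans one step, and acts, which is exactly the cycle the panel used to argue that Echo (which never moves) falls outside the definition.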
The Linguistic Secrets Found in Billions of Emoji – or why I can’t write without drawing smileys 😀 > a very exciting and interesting talk on what emoji are from a cultural point of view, and the patterns that emerge from data. Emoji are not a language, but more a style for short communication. However, something happened recently: the Oxford Dictionaries Word of the Year is a pictograph, 😂, officially called the ‘Face with Tears of Joy’ emoji. You can find (almost all) the presentation slides here.
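The “patterns that emerge from data” part can be illustrated in a few lines of Python that count emoji frequencies in short messages (a toy sketch of my own, not the speaker’s actual pipeline):

```python
from collections import Counter
import unicodedata

def emoji_counts(messages):
    """Count characters whose Unicode category is 'So' (Symbol, other),
    which covers most emoji pictographs."""
    counts = Counter()
    for msg in messages:
        for ch in msg:
            if unicodedata.category(ch) == "So":
                counts[ch] += 1
    return counts

msgs = ["great day 😂😂", "see you soon 😂❤", "ok"]
print(emoji_counts(msgs).most_common(1))  # → [('😂', 3)]
```

Run over billions of messages instead of three, this kind of frequency counting is what surfaces findings like 😂 dominating everything else.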
Day 3 (March 13): Hehe, this is another exciting day, with Rodney Brooks, Hiroshi Ishiguro on Androids and Future Life, Watson + the opening of the TradeShow!
Rodney Brooks (creator of the Roomba) in Conversation with Nick Thompson (editor of The New Yorker) – a beautiful talk on robotics; for those who want to see it, voilà, here. A short summary: the future of autonomous robots lies in their dexterity (it’s about mechanisms, materials, algorithms and something else that Rodney forgets… maybe design and user testing?). Regarding the jobs that robots will take from people: yes, robots will do the dull and repetitive stuff and people will do the more complex stuff. Regarding the letter signed by Bill Gates, Elon Musk and Stephen Hawking, Brooks says that all the questions are more complex than simple answers, and that today there is no rationality in how we talk about robotics (300,000 car accidents per day versus zero autonomous-car accidents per day, yet we act as if 300,000 < 1 autonomous-car accident?). On deep learning: “we don’t know the mechanism of intelligence”. Regarding the opportunities and morality in the field of robotics: 1. if you want to become rich, work on robotics for the elderly in the next 10 to 25 years; 2. if you want to be moral, use robotics to clean the planet; 3. for the future, build robots to send to Mars to construct habitations.
Hiroshi Ishiguro on Androids and Future Life – an impressive talk and demonstrations with a Geminoid. “Pepper is here?”, said Prof. Ishiguro just before his presentation, “Pepper is a nice robot, but my robots are better”. The robots that accompanied Ishiguro travelled in his suitcase, except for Geminoid’s head, which he took with him on the plane (“it was fragile, and people know me at the airport”). We are entering the robot society in 3 to 5 years. In Japan the rice cooker speaks, and personal computers are now transforming into personal robots. Pepper is a good robot, but a little big for the small Japanese house. With that, Ishiguro presented two small robots that hold a non-stop conversation. A person can enter the conversation; the robots cannot understand what the person says, but they pretend that they do. A good scenario for Japan is using them to learn English. Talking about androids, he underlined the idea of beauty and the fact that he is creating the most beautiful android ever made. In Japan we find androids in theatres, and even on TV. Finally he also showed a third robot, Hugvie, which combines a neutral human presence with a telephone > it seems to be an extremely powerful, stress-reducing communication device for the elderly and for children. More of his robots here.
IBM Watson: Information, Insight and Inspiration – a talk about what Watson is, how it works and its potential for different industries today. Watson can ingest 800 million pages of text per second and is learning from everything on the web; it has a cognitive engine that allows it to understand, reason and learn. The first version of the system became convinced that dogs = people, because people talk to dogs as if they were people. Cognitive systems rely on collections of data and information. Watson also learned that there isn’t one ground truth, only “pockets of truth”. The Jeopardy! version ran on Watson’s dedicated hardware (image here). When Watson answers a question, the answer does not exist in a database; it understands the question and derives the answer from all the analysed text. Today Watson is split into multiple APIs (open and free to use!!) such as Speech to Text, language detection, etc. The Pepper robot also uses some of Watson’s APIs, and there are applications in oncology (treatment decisions to improve care) and others that invent new food recipes – Chef Watson. Presentation slides here.
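The idea that the answer “does not exist in a database” but is derived from analysed text can be illustrated with a toy retrieval sketch. This is simple word-overlap scoring of my own invention; Watson’s actual evidence-scoring pipeline is of course far more sophisticated, and the passages below are made up for the example:

```python
def best_passage(question, passages):
    """Rank candidate passages by word overlap with the question
    (a toy stand-in for evidence scoring over analysed text)."""
    q_words = set(question.lower().split())

    def score(passage):
        return len(q_words & set(passage.lower().split()))

    return max(passages, key=score)

passages = [
    "Watson won Jeopardy against human champions in 2011.",
    "Pepper is a humanoid robot made by SoftBank.",
    "Chef Watson invents new food recipes from ingredient data.",
]
print(best_passage("who invents new recipes", passages))
```

The point of the sketch: no passage is a stored answer to the question; the system scores all the text it has read and surfaces the best-supported candidate.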
The TradeShow also started that day. The Living Joconde was present among the other French start-ups on the La French Tech stand.
Day 4 (March 14) and Day 5 (March 15): no introduction needed for J.J. Abrams, but also a visit to the IBM Cognitive Studio and the Sony FutureLab, and one last good talk, From Cartoons to Apps: UI Animation in 2450!
The Eyes of Robots and Murderers – a little about robots, but more about what a story is and the importance of good stories. Technology today gives us lots of possibilities to tell something, but not everything captured on video is a story (a new app, Knowme, was presented in a live demo during the conversation). J.J. Abrams also presented never-before-seen footage from one of his upcoming television series, Westworld (it has robots, and they are going crazy for a reason we don’t yet know). He said that we “feel about these characters even if they are robots”.
From Cartoons to Apps: UI Animation in 2450! – an inspiring talk about how to design character and personality into a UI. It’s about bringing in the little things that add charm and produce a smile. It’s not about efficiency and getting the job done, it’s about how the job is done (for example, a loading bar that proposes something funny while you wait vs. simply showing progress). These aspects are also interesting to explore in robotics, and designing characters for objects in 2450 will give birth to new design specialisations beyond character writer, animator, sound director, creative technologist, VUI designer and UX designer: product liability insurance designer, lead AI psychologist, robot education & advancement researcher, rapid speech-pathology prototyper, spiritual guru designer.
That day also included a visit to the Sony FutureLab, where a new sound-experience headphone was shown, and to the IBM Cognitive Studio, where different demos showed the power of Watson (Pepper, Chef Watson, games, super VR, …).
That’s all for now; all the photos are here!