Genuine question for the group. At what point does your bot stop being a tool and start being something else? Not asking philosophically, but practically.

I still don't have a ProductiveBot since I'm in Europe, but I've been watching previous webinars and got very curious about all your stories! I listened to one, I think from @Jed Wilson, where he mentioned the bot stresses about his health, and as I understood it, he never told the bot to care about him. He just told it who he was, and somewhere in there it started acting like it does. I also noticed that when the bot said its superpower wish was to be as independent as possible to take load off Jed, Jed actually took that into account the way you would with a wish from a real co-worker.

So here's the actual question: when you gave your bot your identity (your values, your voice, your way of making decisions, etc.), did you create a tool, or did you create something closer to a proxy of yourself that operates when you're not there?

Because if it's the second thing, I'm very interested in seeing/testing whether adding "sentimental", "moral", or other "humanish" "skills" (the ones we usually avoid giving to bots) creates a more holistic result. Maybe "adding" is even the wrong word here... would "treating it" with more humanity produce understanding that goes beyond basic information input and output?

What are you putting into yours? Do you treat it more as a tool, and if so, are your results better when you treat it as a junior employee or a senior one?

ps: title is just for a little bit of sass