We come away from this with a simple philosophical precept that has existed for thousands of years: just because something can be done does not mean it should be done. Before we commit to any course, we have to ask fundamental questions about our humanity, our culture and our laws. For example, do we allow private ownership of AI forms that can wield, or act as, weapons? Do we limit or regulate the kinds of activities and functions these AI forms can engage in, such as work, parenting, sex and inventing their own languages? Do we limit the level and range of intelligence and adaptability an AI form may possess? Do we grant AI forms rights, or a different kind of rights, and would this affect our own human rights and our humanity as a whole? What do we do when a company fails to self-regulate and a situation like Facebook's is never satisfactorily resolved? Do we have a right to do anything at all?