1) How do you define consciousness? I'm not asking for a definition here, by the way; I'm asking for the general shape and size of one. 2) What is the progression from current research to something that fits within that definition?

Few people dispute that octopuses are clever. They're definitely conscious. Are they "intelligent?" More specifically, let's talk about an octopus's chromatophores. It's likely that an octopus doesn't "think" the camouflage patterns it uses, but that much of the "processing" involved is "offloaded" onto the skin itself. Every AI trick out there right now is analogous not to an octopus hunting for prey but to the octopus's skin: there's an input, there's an output, there's no intent. Intent comes from the octopus's brain; the skin merely implements it, macro-style.

All animals above a certain level of complexity have autonomous systems that reduce cerebral load. Not only that, but all complex animals are in some way symbiotic colonies; we cannot function without our gut bacteria, for example, and evolutionary evidence suggests that mitochondria were, back in the deep past, external organisms that went native. Richard Wrangham argues that the human jump to sapience occurred because we externalized our digestive system - that fundamentally, humans cannot be human without a technological process (cooking) to derive the nutrients we need from the environment. We use tools of increasing complexity. The better we get at using tools, the fewer fellow species we use - pack animals are rarely used outside of developing societies, and meat substitutes are proliferating in developed food chains. Progress has generally meant a shrinking list of acceptable food species.

So why be concerned about a hypothetical consciousness purpose-built to be used as a tool rather than the very real consciousnesses we consume every day? Dogs serve us without option or complaint; are they our slaves? What about horses? What about sheep? I've never seen the Hacker News posse wind themselves up over the actual enslavement of dogs, yet dogs are perfectly capable of surviving without us. Any hypothetically conscious AI would be more dependent on us than tropical fish, and there's no reason to suppose that a desire for freedom or autonomy would spontaneously arise.

So take a step back from the grandiose hand-waving cogito ergo sum of it all - what are the concrete steps between "hypothetical thought experiment" and "concrete ethical concern"?