I think a huge problem with the debate over whether robots and AI are conscious in any meaningful sense is that it hinges less on the capabilities of the machine in question and much more on the fact that consciousness is a hard problem in itself. We simply lack a good enough definition of consciousness to design meaningful tests grounded in real theory. The general philosophical definition is that a conscious being has a subjective experience of the world. Or, to borrow the common framing, "is there something it is like to be an X?" Does it have an internal thought process, will, wants, and desires? Does it experience things subjectively? Does the robot experience something like pain when it falls off a platform?

But how do you define pain? An amoeba will react negatively to some stimuli and be attracted to others. If it encounters water that is too hot and moves away, is that a biological equivalent of machine learning, or is it pain? Keep in mind that an amoeba has no central nervous system and no brain; it is a single cell. To my mind, the amoeba could be doing either. It could be doing exactly what the robot does when it falls off a platform: "this event is negative, avoid it." Or it could be experiencing pain.
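To make the "negative event, avoid it" reading concrete, here is a minimal, purely illustrative sketch in Python. The names (`AvoidanceLearner`, the "hot water" action, the penalty value) are all made up for this example; the point is only that a simple score update after an aversive outcome is enough to produce avoidance behavior, and nothing in the loop requires, or rules out, anything like subjective experience.

```python
import random

class AvoidanceLearner:
    """Toy agent: keeps a preference score per action and drifts away
    from actions that have led to aversive outcomes. Illustrative only."""

    def __init__(self, actions, penalty=1.0):
        self.preferences = {a: 0.0 for a in actions}
        self.penalty = penalty

    def choose(self):
        # Pick the action with the highest (least negative) score,
        # breaking ties randomly.
        best = max(self.preferences.values())
        candidates = [a for a, p in self.preferences.items() if p == best]
        return random.choice(candidates)

    def feel_consequence(self, action, aversive):
        # "This event is negative, avoid it": lower the score for an
        # action that ended badly. No claim is made about what, if
        # anything, this is like from the inside.
        if aversive:
            self.preferences[action] -= self.penalty


# Hypothetical environment: approaching hot water is always aversive.
amoeba = AvoidanceLearner(actions=["approach_hot_water", "move_away"])
for _ in range(5):
    action = amoeba.choose()
    amoeba.feel_consequence(action, aversive=(action == "approach_hot_water"))

print(amoeba.preferences)  # the hot-water action ends up with the lower score
```

Whether the amoeba (or the robot) is running something functionally like this loop, or is also feeling something while it does so, is exactly the question we cannot currently test.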