Maybe it gets revealed later, but I see a rather large assumption in the initial statement: "Why do people believe AI-enhanced interactive programs, like LLMs and AI-Assistant Agents, hold more conscious qualities than non-personified computer program interactions?"
For me, this "belief" isn't demonstrated by anything other than rhetoric. That is, you seem to assume people hold this belief without offering evidence that it's widely held. Perhaps that doesn't matter, but it feels like the intention is to explain this "widely held" and "common" belief with some just-trust-me-bro™️ rhetoric backing up that initial claim.
Maybe my issue is with the "why" being put there before the "do" has been proven.