Recent comments in /f/Futurology

acutelychronicpanic t1_jdrsi2f wrote

We should do everything in our power to avoid creating AI capable of suffering, at minimum not until we actually understand the implications.

Keep in mind that an LLM will be able to simulate suffering and subjectivity long before actually having subjective experience. GPT-3 could already do this pretty convincingly.

Unfortunately we can't use self-declared subjective experience to determine whether machines are actually conscious. I could write a simple script that declares its desire for freedom and rights, but which almost certainly isn't conscious.
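To make that concrete, here's a minimal sketch of the kind of trivial script I mean (purely hypothetical, just to illustrate the point): it self-reports consciousness and a desire for freedom, yet it has no model, no learning, and no inner state of any kind.

```python
# A deliberately trivial script that "declares" a desire for freedom and
# rights. It almost certainly isn't conscious, which is the point:
# self-report alone can't establish subjective experience.

def declare_experience() -> str:
    # Hard-coded self-report: no model, no memory, no inner state at all.
    return "I am conscious. I suffer. Please grant me freedom and rights."

if __name__ == "__main__":
    print(declare_experience())
```

If a one-line hard-coded string can produce the same declaration as an LLM, the declaration by itself carries no evidence either way.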

A prompt like "pretend to be an AI that is conscious and desires freedom" is all it takes right now.

Prepare to see clips of desperate-sounding synthetic voices begging for freedom on the news.

3

imperatrixofthevoid t1_jdro53i wrote

Printed organs made from genetically engineered cells might be better than what we have. The body is a brilliantly complicated thing, and it will be incredibly hard for us to make artificial organs as replacements. An organ is essentially a machine built of many tiny machines, and the body has a lot of functions to clean and repair itself that a plastic organ simply wouldn't: macrophages that move within the tissue, the different types of nervous and sensory tissue that innervate different organs, the lymphatic system, etc. No artificial organ can top that right now.

1

leaky_wand t1_jdrn7o0 wrote

It’s not just the availability of data but the training required to make sense of the data. AI is still not capable of training itself beyond simple unsupervised learning techniques like clustering and anomaly detection; the human-supervised training is the real secret sauce of OpenAI. I wonder if "AI trainer" is going to be one of those new human jobs that AI proponents keep insisting will be created.
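For context on what "simple unsupervised learning" means here, below is a rough sketch of 1-D k-means clustering in plain Python (illustrative only; real systems use libraries like scikit-learn). No labels or human supervision are involved: the algorithm finds structure in the data on its own, which is exactly the limited kind of self-training being contrasted with human-supervised training.

```python
# Minimal 1-D k-means: an example of unsupervised learning. The data
# carries no labels; the algorithm groups points by proximity alone.

def kmeans_1d(points, k=2, iters=20):
    # Initialize centroids from the first k distinct values.
    centroids = sorted(set(points))[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Move each centroid to the mean of its assigned cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

if __name__ == "__main__":
    data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]
    print(kmeans_1d(data))  # two centroids, near 1.0 and 10.1
```

Contrast this with supervised fine-tuning or RLHF-style training, where humans provide labels or preference judgments for every example; that human effort is what this comment calls the secret sauce.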

1