Do you know why the Mars rover has “metal tires” instead of rubber ones?
When you think about it, rubber has all the superior features of a tire material compared to metal: fewer vibrations, better grip, and so on. Yet we don’t use it for vehicles on other planetary bodies.
The reason is to prevent “forward contamination”. Rubber is composed of long organic molecules. If we used it for tires on devices that search for life by looking for organic molecules, we might create contamination that would render all future organic-molecule-hunting missions moot.
ChatGPT answers programming-related questions correctly maybe 60-70% of the time, while sounding really convincing and confident 100% of the time. Stack Overflow is one of the main sources of well-groomed information from which ChatGPT learned “how” to do so.
Now people using ChatGPT to answer questions on Stack Overflow are creating a problem similar to “forward contamination”. People who have no idea how things are done use it to generate “valid-looking but wrong” responses, poisoning the very sources ChatGPT learned from.
What are we going to do when it comes to training the next AI? Are we going to cut off at the date ChatGPT was released, because most of the data generated after this “epoch” is going to be less trustworthy? (Don’t forget that generating real answers takes maybe thousands of times more effort.)
Here’s a good example from Reddit: someone thinks the “halting problem” can be solved by just asking ChatGPT, writing “People need to try this, I don’t have the knowledge to do this.”
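For readers unfamiliar with why this is hopeless: the halting problem is provably undecidable by Turing’s diagonal argument, and no amount of asking ChatGPT changes that. Here is a minimal sketch of the argument in Python (the names `halts` and `contrarian` are illustrative, not from the Reddit post; the would-be infinite loop is replaced by a return value so the contradiction is observable without hanging):

```python
def halts(program, arg):
    """A claimed halting oracle: should return True iff program(arg) halts.
    We hard-code an arbitrary answer here; the argument below works for ANY
    implementation, because it only relies on the oracle returning a value."""
    return True

def contrarian(program):
    """Does the opposite of whatever `halts` predicts about program(program).
    Returns the string "loops" instead of actually looping forever, so the
    contradiction can be printed rather than hanging the interpreter."""
    if halts(program, program):
        return "loops"   # a real contrarian would enter an infinite loop here
    return "halts"

# Ask the oracle about contrarian(contrarian), then observe what it does:
prediction = halts(contrarian, contrarian)  # oracle claims: it halts (True)
behavior = contrarian(contrarian)           # ...so contrarian chooses to loop
print(prediction, behavior)                 # → True loops: the oracle is wrong
```

Whichever fixed answer `halts` gives, `contrarian` does the opposite, so no correct halting oracle can exist, whether it is a program or a language model.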
Are we all ready for “confident and convincing” misinformation?