Ultimately, however, it's hard to see how we could agree on, create, and enforce any kind of global moratorium on certain aspects of AI development.

The problem the open letter references is human-competitive intelligence, represented by large language models (LLMs) like ChatGPT. That said, the areas of AI research Kurzweil says would be harmed by a pause, such as health, education, and renewable energy, are not the ones the Future of Life open letter is most worried about. The letter states the dangers plainly:

"We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders."

At one level, Kurzweil is right: a declaration to pause AI research won't be adhered to by all. The "bad guys" won't stop, and the competitors racing to catch up with OpenAI won't either.