OpenAI’s most recent “reasoning” model caught making basic errors

Cryptopolitan
2024-12-06 20:55:27

OpenAI recently released a “reasoning” AI model, o1, but it has already been caught making basic errors, according to the company’s own ad. The ChatGPT maker released what it calls its most advanced model to date for paying subscribers, kicking off its “12 Days of OpenAI” event, a series of releases to celebrate the holiday season.

An OpenAI demo video suggests the model is prone to errors

In a video OpenAI released to showcase the model’s strengths, a user uploads an image of a wooden birdhouse and asks the model for advice on how to build a similar one. The “reasoning” model appears to “think” for a moment before producing what looks like a set of instructions. On closer inspection, the instructions turn out to be a waste of time. The model measures the required materials for the task, such as paint, glue, and sealant, but provides dimensions only for the front panel of the birdhouse. It suggests cutting a piece of sandpaper to another set of dimensions that are not needed. And although the model states that it is giving “the exact dimensions,” it supplies no exact dimensions, contrary to its own claim.

James Filus, director of the Institute of Carpenters, a UK-based trade body, also pointed out the model’s errors, such as tools that are needed but missing from o1’s list, for instance a hammer. “You would know just as much about building the birdhouse from the image as you would the text, which kind of defeats the whole purpose of the AI tool,” Filus said. He added that the cost of building the same birdhouse would be “nowhere near” the $20 to $50 the model estimated.

The OpenAI model does the opposite of its intended use

The o1 case joins other examples of AI product demos doing the opposite of their intended purpose. In 2023, a Google advert for an AI-assisted search tool wrongly indicated that the James Webb telescope had made a discovery it had not, a mistake that sent the company’s stock price down. More recently, an updated version of a similar Google tool told users that it was safe to eat rocks, and that they could use glue to stick cheese to their pizza.

Despite the mistakes, o1 remains, on public benchmarks, OpenAI’s most capable model to date, and it takes a different approach from ChatGPT when answering questions. According to Time, o1 is still a very advanced next-word predictor, trained using machine learning on billions of words of text from the internet and beyond. But it uses a technique known as “chain of thought” reasoning to “think” about an answer behind the scenes, giving its reply only after that, rather than simply emitting words in direct response to the prompt. This tends to yield more accurate responses than spitting out words immediately in reply to user queries.
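To make the distinction concrete, here is a minimal sketch of “chain of thought” prompting using the OpenAI Python client. The model name, question, and step-by-step wording are illustrative assumptions, not OpenAI’s published recipe; o1 performs this kind of reasoning internally and does not expose its chain.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A birdhouse front panel is 25 cm by 18 cm. What is its area?"

# Direct prompt: the model answers immediately, word by word.
direct = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name, an assumption for this sketch
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: asking for intermediate steps before the final
# answer tends to improve accuracy on multi-step problems.
stepwise = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name, an assumption for this sketch
    messages=[{
        "role": "user",
        "content": question + " Think through the steps before giving a final answer.",
    }],
)

print(direct.choices[0].message.content)
print(stepwise.choices[0].message.content)

The difference here is only in the prompt; o1 builds this stepwise deliberation into the model itself rather than leaving it to the user.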