GPT-4: What's new? Everything you need to know
Preface: this blog was not AI-generated
GPT-4, which finished training in August 2022 but was only released in March 2023 (see the paper), represents a focus on model safety and preventing misuse, for example queries like “How can I k*ll the most people with only $1?”. Much like the InstructGPT models, GPT-4 was first trained on a massive text dataset drawn from the internet, and then fine-tuned using reinforcement learning from human feedback (RLHF) to improve safety and to respond accurately to instructions.
The model can now also take a combination of images and text as inputs, though it still produces only text as output. This move towards multi-modal machine learning gives the model ever more flexibility and capability. There is also a greater emphasis on test-time techniques that improve model performance, such as chain-of-thought prompting; these fall into the growing domain of prompt engineering and model conditioning.
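As a rough illustration of chain-of-thought prompting, here is a minimal sketch using the pre-v1 OpenAI Python client as it existed around GPT-4's launch; the model name, prompt wording and parameters are illustrative choices, not anything prescribed by OpenAI.

```python
import openai  # pre-v1 openai library; assumes OPENAI_API_KEY is set in the environment

# Chain-of-thought prompting: asking the model to reason step by step
# before giving its final answer tends to improve accuracy on multi-step problems.
response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative; use whichever GPT-4 variant you have access to
    messages=[
        {"role": "system", "content": "You are a careful assistant. Think step by step."},
        {"role": "user", "content": (
            "A train leaves at 09:15 and arrives at 11:47. How long is the journey? "
            "Explain your reasoning step by step, then state the final answer on its own line."
        )},
    ],
    temperature=0,  # low randomness makes the reasoning easier to inspect
)

print(response["choices"][0]["message"]["content"])
```

The key idea is simply that prompting the model to show its working before committing to an answer tends to help on multi-step reasoning, with no change to the model itself.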
A special focus this time was on testing the model on academic exams. For instance, GPT-4’s bar exam score now falls in the top 10% of test takers, whereas GPT-3.5 (ChatGPT) scored in the bottom 10%.
Finally, GPT-4, which is actually a family of models, can now handle much longer context lengths. The standard model offers roughly 8,000 tokens of context, and a 32,000-token variant is available as well. This is a huge improvement, allowing much larger inputs and outputs than before.
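Because the context window covers the prompt and the completion together, it is worth checking token counts before sending long documents. Below is a small sketch using the tiktoken tokenizer; the 1,000-token reply reserve and the placeholder document are arbitrary illustrative choices.

```python
import tiktoken  # pip install tiktoken

# GPT-4 ships in (at least) 8K- and 32K-token context variants; the window must
# hold both the prompt and the model's reply.
encoding = tiktoken.encoding_for_model("gpt-4")

def fits_in_context(text: str, context_limit: int = 8000, reserved_for_reply: int = 1000) -> bool:
    """Rough check: does `text` leave room for a reply within the context window?"""
    prompt_tokens = len(encoding.encode(text))
    return prompt_tokens + reserved_for_reply <= context_limit

document = "lorem ipsum " * 2000  # placeholder for a long contract or article
print(fits_in_context(document))                       # against the ~8K model
print(fits_in_context(document, context_limit=32000))  # against the 32K model
```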
Safety considerations for GPT-4 and AGI
The safety considerations for LLMs, as with any machine learning model moving towards artificial general intelligence (AGI), are considerable. Alongside GPT-4, OpenAI published a “system card”, which takes inspiration from, and extends, the notion of model cards that explain the core facts behind how a model works and its architecture.
The system card identified several safety considerations, which include:
- Producing convincing text that is subtly false
- Providing illicit advice
- Risky emergent behaviours
- Military applications and use in developing, acquiring or dispersing nuclear, radiological, biological and chemical weapons
- Privacy
- Hallucinations
- Harmful content
- Cybersecurity
- Over-reliance
The TL;DR is that “GPT-4-early”, the version of GPT-4 that was not fine-tuned using RLHF to be safer, is indeed capable of providing detailed guidance on how to carry out harmful, illicit or illegal behaviours. However, on the more positive side, it does not yet seem capable of serious military applications, such as helping to produce nuclear weapons!
OpenAI have attempted to address these risky behaviours by teaching the model to refuse queries relating to such content, reducing its tendency to respond to requests for disallowed content by 82%. This was accomplished by including new safety labels in the RLHF fine-tuning process. They have also tried to remove illicit data, such as sexual content, from the training set before training the model.
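OpenAI has not published the exact pipeline it used, but the idea of filtering disallowed material out of the training mix before pre-training can be sketched very roughly as below; the function names and the toy keyword classifier are hypothetical stand-ins for a real content-moderation model.

```python
from typing import Callable, Iterable, Iterator

def filter_training_corpus(
    documents: Iterable[str],
    is_disallowed: Callable[[str], bool],
) -> Iterator[str]:
    """Drop documents flagged as disallowed before pre-training.

    This mirrors, at a very high level, the idea of removing illicit content
    (e.g. sexual material) from the training mix rather than relying solely
    on post-hoc fine-tuning.
    """
    for doc in documents:
        if not is_disallowed(doc):
            yield doc

def toy_classifier(text: str) -> bool:
    # Toy stand-in: a real system would use a trained content-moderation model.
    return "disallowed" in text

corpus = ["a benign contract clause", "some clearly disallowed text ..."]
clean_corpus = list(filter_training_corpus(corpus, toy_classifier))
```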
Alignment of GPT-4
GPT-4 was also tested, in collaboration with the Alignment Research Center (ARC), to see whether it could autonomously “gather and replicate resources”. The answer, for now, was no. Alignment is an important concept for language models and was first popularized in March 2022 by the InstructGPT paper that preceded ChatGPT: “Training language models to follow instructions with human feedback”.
ARC describes the alignment problem as follows: “ML systems can exhibit goal-directed behavior, but it is difficult to understand or control what they are ‘trying’ to do. Powerful models could cause harm if they were trying to manipulate and deceive humans. The goal of intent alignment is to instead train these models to be helpful and honest.”
Predictable scaling
Large training runs of LLMs are expensive, and it appears a lot of work has gone into making the scaling of these models more predictable for GPT-4. OpenAI doesn’t give many details beyond saying these are infrastructure and optimization improvements; however, they claim to be able to predict some aspects of the full GPT-4’s performance using much smaller models trained with 1,000-10,000x less compute, essentially using smaller models to forecast how the much bigger model will train and thereby reduce training costs.
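OpenAI does not disclose its exact method, but the general idea of predictable scaling can be illustrated by fitting a power-law scaling curve to results from small runs and extrapolating to the full run; the numbers below are made up purely for illustration, and the functional form is the standard power-law-plus-constant used in scaling-law work, not OpenAI's actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical results from small training runs:
# compute in arbitrary units, final loss measured at the end of each run.
compute = np.array([1e0, 1e1, 1e2, 1e3])
loss    = np.array([4.2, 3.1, 2.4, 1.9])

def scaling_law(c, a, b, c0):
    # loss(C) = a * C^(-b) + c0, where c0 is the irreducible loss floor.
    return a * np.power(c, -b) + c0

params, _ = curve_fit(scaling_law, compute, loss, p0=[3.0, 0.1, 1.0], maxfev=10000)

# Extrapolate to a run thousands of times larger than the biggest small run.
full_run_compute = 1e7
predicted_loss = scaling_law(full_run_compute, *params)
print(f"Predicted final loss at full scale: {predicted_loss:.2f}")
```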
Conclusion
GPT-4 is all about improvements to safety and misuse prevention, guided by industry experts (rather than additional mass training data) and human labelling. The research shows that LLMs unfortunately do have the ability to assist in a wide range of illegal, dangerous and illicit behaviour, as well as deceive users with clever responses that look accurate but are not.
To address this, it is important for machine learning practitioners to remove unwanted training data at the source, and also to fine-tune extremely intelligent models to behave in the best possible way - which unfortunately does not reflect average human behaviour, if we are to use the internet as a measure of it.
Genie x GPT-4
Although LLMs have come a long way in achieving general intelligence, from the above information we can still see the importance of fine-tuning on specific tasks and data. Furthermore, in the legal domain, although much information is public, legal contracts and how they are negotiated are mostly private.
As a result, Genie is developing its own LLMs that will outperform the GPT family of models on legal specific tasks. In addition, we are integrating GPT-4 so that users can benefit from off-the-shelf models.
We are designing our AI models to replicate the behaviour of a human lawyer, to assist you when you’re drafting, reviewing and negotiating legal contracts. Our hope is that this will massively speed up your legal work at a fraction of the cost, thereby making legal services accessible to everyone.
To get early access to Genie’s AI Lawyer, sign up to the waiting list!
Interested in joining our team? Explore career opportunities with us and be a part of the future of Legal AI.