Warm greetings to all! Let's proceed without spending any more time on welcomes, because as you read this, there are generative models learning far more than the pixels consumed in displaying this text.
Disclaimer: The article below captures an interaction between the user (myself) and a generative AI (ChatGPT-3.5). The AI's summary appears under the heading "Conclusion"; beyond that, there are places where I mark an AI reply with the label "Response".
Introduction: Our discussion started with my request to generate Python code for playing the Trump card game. The catch: I never asked for "Trump card game" code by name. Instead, I gave my AI companion the game rules, and with every prompt I asked it to modify the code, arriving at a program built only on the rules I provided. At the end, like any naive user, I simply ran the code to check its correctness.
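To give a sense of what was being built, here is a minimal sketch of the kind of trick-taking game loop my rules described. This is my own illustration under assumptions, not the model's actual output; the names (`deal`, `trick_winner`) and the `(rank, suit)` card representation are mine:

```python
import random
from itertools import product

# Hypothetical sketch of a rules-driven trick-taking game, for illustration only.
SUITS = ["Hearts", "Diamonds", "Clubs", "Spades"]
RANKS = list(range(2, 15))  # 11=J, 12=Q, 13=K, 14=A

def deal(num_players=4):
    """Shuffle a standard 52-card deck and deal it evenly among the players."""
    deck = list(product(RANKS, SUITS))
    random.shuffle(deck)
    hand_size = len(deck) // num_players
    return [deck[i * hand_size:(i + 1) * hand_size] for i in range(num_players)]

def trick_winner(plays, trump_suit):
    """plays: list of (player, (rank, suit)). Highest trump wins;
    otherwise the highest card of the suit that was led wins."""
    led_suit = plays[0][1][1]
    trumps = [p for p in plays if p[1][1] == trump_suit]
    candidates = trumps if trumps else [p for p in plays if p[1][1] == led_suit]
    return max(candidates, key=lambda p: p[1][0])[0]

if __name__ == "__main__":
    hands = deal()
    trump = random.choice(SUITS)
    # Naive play: each player simply plays the first card in hand.
    plays = [(i, hand.pop(0)) for i, hand in enumerate(hands)]
    print(f"Trump is {trump}; player {trick_winner(plays, trump)} wins the trick.")
```

Each subsequent prompt then layered more of my rules onto a loop like this one.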
The conflict: Although the code worked well for some prompts, as I asked for modifications that increased the code's complexity (in simple terms, adding more features), roughly 20% of the responses contained errors in the code.
Importing Intelligence: For one specific prompt, "Modify the code such that players have their own intelligence in choosing the next best card", the response was, as I observed, BOUND to contain an error or two. The twist in the tale came when I read the statement in the response.
Ask: Modify the code such that players play with some intelligence. Response:
This code adds intelligence to the AI players to make strategic card choices during the game. AI players will now try to follow suit, play higher-ranking trump cards, and avoid wasting strong cards on weak tricks.
Still, this "intelligent" code never worked, but the model did try to use Python libraries and concepts to create an intelligent player, an achievement one might expect later in this century.
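For reference, a minimal working sketch in the spirit of that description might look like the following. This is my own reconstruction under assumptions (the `choose_card` name and the `(rank, suit)` card tuples follow the earlier sketch), not the broken code the model actually produced:

```python
def choose_card(hand, trick_so_far, trump_suit):
    """Heuristic matching ChatGPT's claim: follow suit, play a
    higher-ranking trump only when needed, and avoid wasting strong
    cards on tricks that are already lost.
    Cards are (rank, suit) tuples; trick_so_far lists cards played so far."""
    if not trick_so_far:
        # Leading the trick: probe with a mid-strength card.
        return sorted(hand)[len(hand) // 2]

    led_suit = trick_so_far[0][1]
    same_suit = [c for c in hand if c[1] == led_suit]
    if same_suit:
        best_rank = max(c[0] for c in trick_so_far if c[1] == led_suit)
        winners = [c for c in same_suit if c[0] > best_rank]
        # Win as cheaply as possible; otherwise throw the weakest card of the suit.
        return min(winners) if winners else min(same_suit)

    trumps_in_hand = [c for c in hand if c[1] == trump_suit]
    trumps_played = [c[0] for c in trick_so_far if c[1] == trump_suit]
    overtrumps = [c for c in trumps_in_hand
                  if not trumps_played or c[0] > max(trumps_played)]
    if overtrumps:
        return min(overtrumps)  # smallest trump that still wins the trick
    return min(hand)  # trick is lost: discard the weakest card, save the strong ones
```

The point is that such "intelligence" is really just a handful of comparisons, a long way from the capability the response's wording advertised.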
In any case, there were mainly two scenarios in which I observed GPT-3.5 hallucinating and making independent decisions, with errors in its responses.
Scenario 1: Asking it to add intelligence.
Scenario 2: Asking it to optimize the code.
Path to be Optimum Prime: In Scenario 2, where I asked it to optimize the code, our AI companion tended to respond well, but I made the observations below:
It hallucinated, giving back the same "optimized" code again and again.
The optimized code now contained:
Errors.
Feature removal: independently modifying the code by dropping a rule I had stated at the beginning of our conversation.
Conclusion: Below is a summary generated by our AI companion.
Prompt: Draft a story (with code snippets, if needed) on how you behaved in this entire conversation; include the following:
What types of questions I asked, and how you replied.
How well I was satisfied, and what your scope for improvement is.
Response:
Prompt: Can you write an essay/story on this?
Response:
Prompt: Is that all, or do you think more SHOULD be added? If you think more can be added, feel free to do so.
Response: