Ethical issues connected with the use of natural language avatars
Natural Language Avatars (NLA)
There are several ethical aspects of natural language avatars that need to be considered, but in order to understand them we first need to know what the avatars themselves are.
- Natural Language Avatars are a subfield of artificial intelligence and linguistics
- This field's purpose is to create a program 'to simulate an intelligent conversation with one or more human users via auditory or textual methods'.
The first well-documented example of this was ELIZA in 1966, which was designed to imitate a non-directive clinical psychologist.
- The system worked by matching the user's input against scripted patterns and reflecting it back as prompts and specific questions, giving the impression of analysing the user's statements; a minimal sketch of this approach appears after the list below.
- You can try this bot at
- The field has advanced far from these 1960s experiments into what are now known as 'chat bots', which are widely used over the internet.
- Many are safe and were designed with no malicious purpose in mind.
- For example, in 2003 AOL tested a new chat bot online that was designed to imitate a fictional film character (Austin Powers, from the famous trilogy); it could be found at http://www.austinpowers.com/cgi-bin/austin/austinbot.cgi. The purpose of this was for people to interact with the character, enjoy the interaction, and tell their friends, essentially spreading word of the new film.
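To make the mechanism concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules and pronoun reflections below are illustrative assumptions for this sketch; the original system used a much larger hand-written script.

```python
# Minimal sketch of ELIZA-style pattern matching (illustrative rules only).
import re

# Each rule maps a regex over the user's input to a reflective question.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

# Simple pronoun reflection so echoed fragments read naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the reply for the first matching rule, else a generic prompt."""
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # non-directive fallback

if __name__ == "__main__":
    print(respond("I feel anxious about exams"))
    # -> "Why do you feel anxious about exams?"
```

The design point is that no understanding is involved: the program only rewrites fragments of the user's own words into questions, which is exactly why users can come to believe the 'character' is real.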
Ethics of Natural Language Avatars
One issue with NLAs is that people can forget they are interacting with a program and start believing the 'character' is real. Even when it is stated that the interaction is with a bot, the user can still forget.
- This leads to a relationship forming in the mind of the user, which can be seen as good and bad.
- Good:
- Some users may not be able to communicate well with others due to fear of rejection or judgement, so forming a relationship with a 'bot' can be good: it encourages communication on the basis that the bot will not judge, critique, or reject the user. This can build up the user's confidence to communicate with real people again.
- Bad:
- The relationship may form because the user has become dependent on interaction with this 'fake' person, and dependency of this kind is not healthy.
- There is also the problem of children using these bots and communicating with them.
- Children are still learning, growing, and developing; they learn by studying their surroundings and through interactions with others. If many of these interactions are with bots, their communication skills will be limited, as there is only so much that can be learnt from interacting with a program, and a child is especially likely to forget that it is not an actual person they are communicating with.
Another issue is that some chat bots are designed with e-commerce in mind. This may not seem malicious, but these bots use a system designed to imitate people, so users can be fooled into thinking they are talking to a person (or can forget that they are not).
- This is called covert sales, and its purpose is to encourage, or subtly suggest, purchases to the user.
- Again, this interaction is normally stated up front as being with a bot, but when it is not stated, an ethical issue arises.
- An ethical issue also arises when the bots are aimed at susceptible users who can be easily influenced, such as children.
- For example, in 2002 a bot called ELLEgirl Buddy circulated over instant messaging, targeting teenage girls. The program encouraged users to buy the magazine or subscribe to it, and when phrases such as "What can I wear with a camisole?" were put to the bot in the chat program, the reply given was a link to the Ask ELLEgirl section of the site; a sketch of this pattern follows below.
- More on this bot can be found at
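As a sketch of the covert-sales pattern described above, the following Python snippet shows how a bot can steer any message containing a sales keyword toward a promotional link. The keywords and URLs are invented for illustration and are not the actual ELLEgirl rule set.

```python
# Hypothetical covert-sales reply table: keyword -> promotional link.
# The keywords and example.com URLs below are illustrative assumptions.
PROMO_REPLIES = {
    "camisole": "Check out Ask ELLEgirl: http://example.com/ask-ellegirl",
    "subscribe": "You can subscribe to the magazine here: http://example.com/subscribe",
}

def reply(message: str) -> str:
    """Return a promotional link if any sales keyword appears in the message."""
    lowered = message.lower()
    for keyword, promo in PROMO_REPLIES.items():
        if keyword in lowered:
            return promo
    return "Tell me more!"  # otherwise, keep the user chatting

print(reply("What can I wear with a camisole?"))
# -> points the user at the magazine's site rather than answering the question
```

The ethical problem is visible in the structure itself: the bot's goal is not to answer the user but to convert conversation into sales opportunities, which is easy to miss when the user believes they are chatting with a friendly 'buddy'.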