Source: Image generated by Midjourney AI – Viona M, Research Fellow DevCAFE
In 2022, a trend emerged around the use of various Artificial Intelligence (AI) tools such as Midjourney, LensaAI, and ChatGPT. Many social media users, from ordinary internet users to celebrities, shared their experiences with these tools on their accounts. For example, we often see posts about "magic avatars" generated by the app Lensa, or people sharing the visual products they created with Midjourney. More recently, ChatGPT has peaked in popularity for language tasks, content generation, and other text-based work. This trend piqued our curiosity as a Tech4Good NGO. In this article, we introduce these tools and examine whether the hype matches the performance.
Getting to Know How AI Works
According to Haton (2006), AI is the design and development of computer systems that can solve complex problems that require human-like abilities such as vision and language understanding, using symbolic information and techniques that differ from traditional algorithmic methods. AI has applications in various domains, including theorem proving, natural language processing, speech recognition and understanding, image interpretation and vision, robotics, and expert systems. These applications involve the use of advanced techniques for understanding and processing data and often require the use of knowledge-based reasoning methods. AI has made significant progress in these areas, but there is still work to be done in order to enable AI systems to understand and interact with the human environment fully.
Furthermore, according to IBM, there are two main types of artificial intelligence: narrow and general. Narrow AI is designed to perform a specific task, such as playing a game or recognizing a face, while general AI is capable of learning and adapting to perform a wider range of tasks.
When exploring the world of AI, we may also encounter terms like machine learning and deep learning. These terms are often used interchangeably, but it is important to understand the differences between them. Deep learning and machine learning are both sub-fields of artificial intelligence, with deep learning being a sub-field of machine learning. The main difference lies in how each algorithm learns. Deep learning automates the feature extraction process and can process large datasets with little human intervention, while classical machine learning relies more on human intervention and structured data. Deep learning can use labeled datasets (supervised learning) or process unstructured data and determine on its own the hierarchy of features that distinguish different categories of data. Classical machine learning, by contrast, requires structured data and human intervention to process data, as stated on IBM's site.
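To make the feature-extraction step concrete, here is a toy Python sketch (the messages and the choice of features are purely illustrative, not from any real system): in classical machine learning, a human decides in advance which properties of the raw data matter and converts the data into a structured feature vector by hand, which is exactly the step deep learning automates.

```python
# Toy sketch of hand-crafted feature extraction, the step that classical
# machine learning relies on humans for and deep learning automates.
# The example messages and the chosen features are purely illustrative.

def extract_features(msg: str) -> dict:
    """Turn a raw message into a small, structured feature vector."""
    letters = [c for c in msg if c.isalpha()]
    upper_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    return {
        "length": len(msg),                    # how long the message is
        "upper_ratio": round(upper_ratio, 2),  # share of uppercase letters
        "exclamations": msg.count("!"),        # how "shouty" it looks
    }

for message in ["WIN CASH NOW!!!", "see you at lunch"]:
    print(message, "->", extract_features(message))
```

A classical model (say, a decision tree) would then operate only on these hand-picked numbers, whereas a deep network would be fed the raw text or pixels and left to discover useful features on its own.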
AI has the potential to greatly impact human life: it can improve healthcare, transportation, education, and other public sectors. Recently, AI technology has been applied across industries to create self-driving cars, aid medical diagnoses, enhance customer service, perform language tasks, and generate artistic visual products. As AI advances, it is important to carefully consider the responsible development and use of the technology. Below, we examine three AI apps/tools that are currently hyped.
Midjourney
Images are generated with Midjourney/Viona M
The images above are the results of an experiment we conducted using Midjourney. All it takes is writing a text prompt describing what you want to depict, and to a layperson the results are quite impressive. Midjourney is easy to use and can generate images in a minute or less. That said, we found that Midjourney still struggles with complex instructions and specific contexts, but it is sensitive enough to capture the big picture and the other elements you want included in the image. It can also reproduce a range of visual styles, making it helpful for anyone looking for creative ideas.
Of the three AI tools discussed in this article, Midjourney is the first one we tried. Before trying anything new, we usually do some research and read up on related information, and we found that these AI tools come with both pros and cons, which we will discuss further later on. For now, let's get to know Midjourney. According to its website, Midjourney is an independent, self-funded research lab that focuses on exploring new ways of thinking and expanding the imaginative abilities of the human species, working on design, human infrastructure, and artificial intelligence. It generates images from text prompts.
Midjourney’s ability to produce an image is extraordinary: simply write a text prompt describing what you want it to create, and you get an image right away. The quality of the images is surprisingly good. Many people have shared their experiences with Midjourney on social media, using it to make memes, posters, digital artwork, and more. Pros and cons immediately appeared: Midjourney clearly improves work efficiency in creating visual products, but it also raises ethical concerns among artists and digital workers.
Jason M. Allen, who created an artwork with Midjourney and won an art prize at the Colorado State Fair, triggered conversation and debate among art workers about whether AI art can be considered a work of art, and about the potential for AI-generated art to replace human work.
According to a New York Times article by Roose, controversy over the use of new technologies in art-making is not a new phenomenon. In the past, some painters resisted the camera, seeing it as a threat to human artistic ability. Similarly, digital editing tools and computer-assisted design programs have faced criticism for not requiring enough skill from the humans using them. Some believe the current generation of A.I. tools, like DALL-E 2 and Midjourney, are different because they can create visually appealing art with minimal input from humans.
These tools also work by collecting millions of images from the web and using algorithms to identify patterns and relationships in them, which they then use to generate new images in the same style. This means that artists who share their work online may actually be contributing to the development of their algorithmic rivals, as the Times also notes.
Heikkilä reported a similar issue in an article for MIT Technology Review: these open-source programs, which collect images from the internet without seeking permission or providing proper attribution to the original artists, are raising ethical and copyright concerns. Artists such as Rutkowski have expressed frustration with this. Rutkowski’s name has reportedly been used as a prompt for image generation by Stable Diffusion about 93,000 times, according to the website Lexica, which tracks over 10 million images and prompts generated by the program, and his name is also frequently used as a prompt in the text-to-image generator Midjourney. Several factors contribute to Rutkowski’s popularity as an AI prompt; many of his artworks have been taken from ArtStation, a website where artists share their online portfolios. Rutkowski stated that he bears no resentment towards those who use his name as a source of inspiration, calling it “a cool experiment” for them, but he also expressed concern that it could threaten the careers of himself and other artists.
In conclusion, AI image generators such as Midjourney can make it much easier for humans to create visual products. However, their ability to produce stunning images is not free from ethical and copyright problems.
LensaAI
Lensa is a photo and video editing app developed by Prisma Labs. It saw a significant increase in worldwide downloads after the release of its “magic avatars” feature in November. According to a Washington Post article by Tatum Hunter, Lensa uses a machine learning model called Stable Diffusion, trained on a large number of image-and-text pairs collected from the internet, to generate new images in the likeness of the user. When a user uploads their photos, the images are sent to the cloud and an individualized machine learning model is activated to generate new images that seem accurate. Impressive, right? However, the widespread use of this AI app has also raised ethical concerns and questions about technological inclusivity. There have been reports of inaccurate AI portraits of people with dark skin and certain ethnicities, including some Asian users and people who wear hijabs. This may be due to a lack of representation of these groups both in AI engineering and in the training images used. Such inaccuracies could lead to problems with facial recognition in law enforcement and potentially contribute to discrimination.
Of course, this makes us more aware that the development of AI technology has not been inclusive of Black people and other people of color. This exclusion occurs not only in the results users receive from these services, but from the very start of development itself.
Because LensaAI generates images based on photos uploaded by users, the app sparks ethical concerns about data usage. Many have questioned whether the photos uploaded by users are then used to train its machine-learning model without consent.
Furthermore, as reported in The Washington Post, Prisma Labs says Lensa does not share any data or insights from users’ photos with third parties, though its privacy policy allows for this possibility. The company also said it uses the photos provided by users only to generate avatars, and deletes the photos and the machine learning model trained on them after the process is complete. Prisma Labs’ CEO stated that the company is not using the photos or individualized models to train a facial recognition network, but did not specify whether the company stores any data derived from user photos; he said the company keeps only the “bare minimum” of data. In short, the development and use of artificial intelligence like Lensa still raises a number of ethical concerns.
Since we prefer not to share our personal photos, we chose not to try the Lensa app. Nevertheless, if you’re interested in using it, we suggest reading this Wired article with tips from experts before giving it a try.
ChatGPT
ChatGPT is an OpenAI project designed to make talking with an AI feel like a natural, human-like conversation. When starting to use ChatGPT, we see a disclaimer warning users that the technology may not always be accurate and may generate incorrect or biased information and offensive content, as stated by Pranshu Verma and Rachel Lerman in the Washington Post article titled “What is ChatGPT, the viral social media AI?”. The disclaimer also mentions that the model is not meant to provide advice. ChatGPT can produce responses that seem reasonable but contain errors, which has raised concerns about the trustworthiness of this type of AI without human supervision, given the potential for errors, nonsensical output, or false information. Moreover, some users have noted that the chatbot can exhibit racial and sexual biases and may get basic facts wrong.
Like Midjourney and Lensa, ChatGPT also raises questions and concerns about ethical issues regarding AI. So, without thinking twice, we tried an experiment by chatting directly with ChatGPT concerning this topic. Here are the answers:
Can you tell me more about yourself?
“I’m Assistant, a large language model trained by OpenAI. I was designed to be able to answer questions and engage in conversations about a wide range of topics. I don’t have personal experiences or feelings, but I can provide information and perspectives on many things. Is there anything specific you’d like to know?”
How does a large language model work?
“Language models are trained to predict the next word in a sequence of words, given the words that come before it. To do this, they learn the statistical patterns that occur in large datasets of text. During training, the model is presented with a large number of input sequences, and its job is to predict the next word in each sequence. The model is then evaluated on how well it is able to do this, and the weights in the model are adjusted to minimize the error.
The size of a language model is determined by the number of parameters it has, which is a measure of the complexity of the model and its ability to capture patterns in the data. Larger models have more parameters and are generally able to learn more complex patterns, but they also require more data and computational resources to train.”
Where does the dataset used to train Chatgpt come from?
“I’m sorry, but I don’t have information on the specific dataset that was used to train the chatGPT model. chatGPT is a variant of the GPT-3 model, which was trained on a dataset of web pages, books, and other texts. The GPT-3 model was developed by OpenAI and is one of the largest and most advanced language models currently available. It has been trained on a dataset that includes a wide variety of texts, including books, articles, and websites, and has the ability to perform a variety of language tasks, including translation, summarization, and question answering.”
As chatGPT is a variant of the GPT3 model, that was trained on a dataset of web pages, books, and other texts. it might lead to some ethical concerns. So, please tell me more about the ethics of ChatGPT?
“There are many ethical considerations that can arise when developing and using large language models like ChatGPT. One concern is the potential for a biased or offensive output. Language models are trained on data from the real world, and if the data includes biased or offensive language, the model may produce biased or offensive output. This can be particularly problematic if the model is used in a sensitive or high-stakes context, such as in automated customer service or content moderation.
Another concern is the potential for language models to be used to produce fake or misleading content. Language models can generate human-like text that is difficult to distinguish from real text, and this can be used to create fake news or other misinformation.
There are also privacy concerns related to the use of language models. The data used to train these models can include sensitive personal information, and there is a risk that this information could be accessed or misused.
It is important for developers and users of language models to consider these ethical issues and take steps to address them. This can include measures such as evaluating the model’s output for bias and offensive language, being transparent about the data used to train the model, and implementing strong privacy controls to protect sensitive personal information.”
It’s impressive to see ChatGPT answer such questions, though its responses still read as quite stiff and robotic. It is nonetheless undeniable that ChatGPT’s capabilities are impressive, for instance in how it processes the responses and evaluations given to it in the chat. After some exploration, we found that ChatGPT, like Midjourney, still has trouble with complex instructions and specific contexts, but it can be a valuable resource for generating ideas and finding inspiration. The dialogue above also shows that ChatGPT, like other AI tools, raises ethical questions about data usage in the development and use of AI. Despite its limitations, ChatGPT performs various language tasks well, such as translation, summarization, and question answering, and its training on a wide range of texts, including books, articles, and websites, gives it broad topical coverage. However, ChatGPT and other AI tools have the potential to cause harm if not properly regulated, as they can produce incorrect or biased information or malicious instructions. Overall, while AI development can bring significant benefits, it is crucial to carefully consider the risks and regulate its use appropriately. The context in which these tools are used also matters. Read our founder’s blog in the American Evaluation Association article for more on developing tools with context in mind.
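ChatGPT’s own description of next-word prediction can be made concrete with a toy sketch. Real models like GPT-3 learn millions of weights adjusted to minimize prediction error, but the core idea, predicting the next word from the words that came before, can be illustrated with simple bigram counts over a made-up corpus (the corpus below is invented for illustration only):

```python
from collections import Counter, defaultdict

# Made-up training corpus; a real language model is trained on vastly
# more text scraped from books, articles, and the web.
corpus = "the cat sat on the mat and the cat ate the rat".split()

# Count how often each word follows each preceding word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequent word seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Where this toy counts word pairs, a real model replaces the counts with learned parameters tuned by gradient descent, which is what the “weights in the model are adjusted to minimize the error” in the dialogue above refers to.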
Further Reflections
The development and use of artificial intelligence raises a number of concerns, including the need to regulate AI properly, non-inclusive AI development, and the potential for displacement of human work.
- The Need to Regulate AI Properly
Based on the previous discussion, there is clear concern about the need for proper regulation of AI. As AI becomes more prevalent, it is important to ensure it is developed and used ethically and responsibly. This may include establishing guidelines for the development and use of AI that prevent unethical uses of data which could cause copyright issues. In other words, the development and use of AI need to be ethical, transparent, and beneficial to society.
- Non-Inclusive AI Development
The issue of non-inclusive AI development, such as the LensaAI case mentioned above, has increasingly highlighted the diversity crisis in the AI industry. This is in line with research findings from AI Now, which emphasize that this crisis should be taken seriously. AI Now’s 2019 research found that only a small proportion of authors at leading AI conferences are women, and the majority of AI professors are men. The lack of diversity is particularly evident in tech companies: women make up only 15% of AI research staff at Facebook and 10% at Google, while Black workers make up only 2.5% of Google’s workforce and 4% at Facebook and Microsoft. This absence of diversity among AI researchers, including underrepresentation of women, gender minorities, Black people and other people of color, and people with different abilities, contributes to significant biases in the development of AI.
AI Now’s findings also state that bias in AI systems can reflect long-standing patterns of discrimination. Bias and discrimination in AI systems can have significant consequences for underrepresented groups, and these issues are particularly relevant because AI systems are increasingly used in many areas. In addition to examining individual systems for bias, it is important to consider the broader context of system design and the societal discrimination that can be reflected in data provenance. Ensuring diversity and inclusion in the AI workforce is crucial in addressing these issues.
- The Potential for Displacement of Jobs
Furthermore, as AI becomes more advanced, it may be able to perform tasks that were previously done by humans, which could displace workers who are no longer needed for those tasks. Rumberger’s (1984) article “High Technology and Job Loss” shows that such fears about losing jobs to machines are nothing new; they have existed since the beginning of the industrial revolution, when machines began replacing work previously done by workers.
Rumberger (1984) also points out that not only are a large number of jobs lost to machines, but the types of jobs that existed before a new technology also have the potential to disappear, from physical to mental work. The shift from physical toward mental labor was already evident in the early years of microelectronics, as seen in the growing use of computers and other technological tools in the workplace. In 1984, when the article was published, this shift allowed more tasks to be completed through mental rather than physical effort, and the transition continues today as technology keeps changing the nature of work across industries. Now, advances in artificial intelligence (AI) have made it possible for machines and computers to perform tasks that once required human intelligence, which means that even more complex mental or cognitive labor may be done by technology in the future. Both Midjourney and ChatGPT already show signs of this.
Technological progress often seems to make daily tasks easier. While technology can enhance the efficiency and effectiveness of work, leading to increased productivity, it also frequently raises concerns about job displacement. This pattern shows that technology has not always succeeded in benefiting everyone, especially workers. Brynjolfsson (in Rotman, 2013) points out that although GDP is growing, median income is not increasing at the same rate, which he describes as the paradox of our time: despite high levels of productivity and innovation, median income and job availability are declining. This is because technology is advancing faster than individuals and organizations can develop the necessary skills and adaptability, leaving some people behind. The paradox illustrates systemic issues in the development of AI technology that does not consider workers’ perspectives.
On the subject of systemic problems in AI development that lacks a worker’s perspective, Polimpung’s article “Secanggih-Canggihnya Revolusi Teknologi Industri 4.0, Tetaplah Ia Primitif” (roughly, “However Sophisticated the Industry 4.0 Technological Revolution, It Remains Primitive”) presents an insightful analysis. According to Polimpung, in a capitalist economic system without a worker’s perspective, technological development, including AI, will always depend on a process of primitive accumulation. By this he means capital accumulation through separating workers from the conditions of their labor and commodifying their labor process, where the labor being accumulated is cognitive labor captured through datafication. This can be seen in many currently popular AI tools, such as Midjourney, Lensa, and ChatGPT, which accumulate creative labor through datafication: artwork, copywriting, and creative writing such as poetry and short stories are absorbed into the AI. This is also problematic because much of the data used is scraped from the internet without consent. Additionally, many workers do not benefit from these technological developments because they are alienated from them and unable to fully participate in or reap the rewards of these advances.
That said, none of the above necessarily makes technology an adversary of workers. The problem is not the technology per se, but the way technology is controlled in a capitalist manner and the way its development is steered solely toward industrial profit, as Polimpung also states in the article.
The discussion above can serve as a reflection and a simple initial reading for understanding widespread concerns about job displacement due to the increasing capabilities of artificial intelligence (AI). These concerns are understandable, as the use of AI in the workplace can bring significant changes. Still, technology should not be seen as a threat: automation can free human workers to focus on more complex and higher-skilled tasks, allowing us to use our unique skills and abilities for self and community development and to enjoy the conveniences of life. However, this potential can only be realized if the development of technology incorporates a worker’s perspective and is inclusive, non-discriminatory, non-colonizing, ethical, transparent, and responsible.
Conclusion
To summarize, these AI tools that are currently in demand may struggle with handling complex instructions and specific contexts. Despite this, they can still be helpful in providing ideas and inspiration. It is also important to recognize that the development and use of AI involve multiple complex and layered issues, from the lack of perspective of workers to ethical concerns. Therefore, it is essential to approach the development and use of AI in a comprehensive and systematic manner, considering its potential benefits and impacts on society.
References
Haton, J. P. (2006). A brief introduction to artificial intelligence. IFAC Proceedings Volumes, 39(4), 8-16. https://doi.org/10.3182/20060522-3-FR-2904.00003
Heikkilä, M. (2022, September 16). This artist is dominating AI-generated art. And he’s not happy about it. MIT Technology Review. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/
Hunter, T. (2022, December 08). AI selfies — and their critics — are taking the internet by storm. The Washington Post. https://www.washingtonpost.com/technology/2022/12/08/lensa-ai-portraits/
IBM Cloud Education. What is Artificial Intelligence (AI)?. IBM. https://www.ibm.com/uk-en/cloud/learn/what-is-artificial-intelligence#toc-deep-learn-md_Q_Of3
Midjourney. About Midjourney. https://midjourney.com/home/?callbackUrl=%2Fapp%2F
Polimpung, H. Y. (2018, April 23). Secanggih-Canggihnya Revolusi Teknologi Industri 4.0, Tetaplah Ia Primitif. Indoprogress. https://indoprogress.com/2018/04/secanggih-canggihnya-revolusi-teknologi-industri-4-0-tetaplah-ia-primitif/
Rogers, R. (2022, December 09). What You Should Know Before Using the Lensa AI App. Wired. https://www.wired.com/story/lensa-ai-magic-avatars-security-tips/
Roose, K. (2022, September 02). An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy. The New York Times. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
Rotman, D. (2013, June 12). How Technology Is Destroying Jobs. MIT Technology Review. https://www.technologyreview.com/2013/06/12/178008/how-technology-is-destroying-jobs/
Rumberger, R. W. (1984). High Technology and Job Loss. Technology in Society, 6(4), 263-284. https://doi.org/10.1016/0160-791X(84)90022-8.
Verma, P. & Lerman, R. (2022, December 06). What is ChatGPT, the viral social media AI?. The Washington Post. https://www.washingtonpost.com/technology/2022/12/06/what-is-chatgpt-ai/
West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html.
Note: This article was written by Viona M, Junior Research Fellow, R4D Unit of the Development CAFE. The views are the author’s own and do not necessarily reflect The Development CAFE’s official position on these tools, nor constitute an endorsement; it is intended only as an informational piece.