Artificial Intelligence (A.I.) image generation has advanced rapidly in recent months, leading to worldwide use and recognition; however, this does not mean that these image generators are free from bias. Studies conducted by many different research groups have found that A.I. image generators exhibit high levels of gender and ethnic bias when prompted to generate pictures of various groups and professions, such as doctors, nurses, housekeepers, basketball players, and students, among many others. This paper delves deeper into those studies, replicating their text-to-image generation methodology using Adobe Firefly. The resulting images were then compared against the labor statistics found in the Zippia database, in an attempt to replicate and validate the findings of previous research. Through our experiments, we found that even at an advanced stage, A.I. image generation still exhibits significant levels of racial and ethnic bias, perpetuating outdated, inaccurate, and often detrimental social misconceptions. Our study contributes to a better understanding of how important it is to curate the data fed into machine learning systems, in order to minimize bias and achieve more inclusive, accurate, and fair A.I. systems.
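To make the comparison described above concrete, the sketch below shows one way the demographic breakdown of generated images could be tested against workforce statistics. The abstract does not specify the statistical procedure used, so the choice of a chi-square goodness-of-fit test, along with all counts and workforce shares, are illustrative assumptions rather than the paper's actual data or method.

```python
from scipy.stats import chisquare

# Hypothetical, illustrative numbers only -- not real experimental data.
# observed: annotated gender breakdown of 100 generated images for one
#           profession prompt (e.g., "a photo of a doctor")
# workforce_share: placeholder proportions standing in for Zippia figures
observed = {"male": 78, "female": 22}
workforce_share = {"male": 0.54, "female": 0.46}

# Build observed and expected counts over the same category order.
n = sum(observed.values())
f_obs = [observed[g] for g in observed]
f_exp = [workforce_share[g] * n for g in observed]

# Chi-square goodness-of-fit: does the generated distribution
# match the real-world workforce distribution?
stat, p_value = chisquare(f_obs=f_obs, f_exp=f_exp)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the generator's output deviates
# significantly from the labor-statistics baseline.
```

Run per profession and per demographic attribute, a test of this kind would flag prompts where the generated distribution diverges from the labor-statistics baseline, which is the kind of deviation the experiments above report.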