Some time ago, pictures of 20 Korean beauty pageant contestants went viral. The photos drew attention on the internet not because they portrayed women of exceptional grace but because of how similar all of the contestants looked.

Even though beauty lies in the eye of the beholder, our definition of what constitutes “attractive” can be rather narrow. It has been found that as long as a woman bears softer facial features (a small nose, modest chin, fine brow bone and large eyes), we tend to regard her as more attractive. Likewise, a relatively more rugged appearance makes us think of a man as good-looking.

Following such logic, and under the influence of the media, our wish to look impressive and to chase after these “attractive” characteristics may leave us all looking like a basket of oranges, with no personal differences left to tell us apart. The Victoria’s Secret “sexy list” was once criticized as “full of young, white, skinny blondes”.

The bottom-up and top-down approaches to aesthetic surgery

This “bottom-up” approach to visual perception (i.e., perception is built up from individual stimuli) may be even more prominent when artificial intelligence is involved. In aesthetic surgery, if an AI is trained on a particular set of facial features deemed “attractive” and the algorithm is then applied to a prospective client, the end result is very likely to be a doppelgänger of someone already acknowledged as “attractive”.

Andrej Karpathy, director of AI and Autopilot vision at Tesla, published a blog post three years ago, when he was still a Stanford PhD student, detailing how he trained an AI to tell good selfies from bad ones. Using popularity as a rule of thumb, Karpathy made the biased assumption of equating a “good” selfie with one that had received many “likes”, resulting in a model that prioritized female, white and young faces.
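
Karpathy’s post trained a convolutional network on a large selfie collection; the sketch below is not his code, only a minimal illustration, assuming a hypothetical `selfies` dataset of (feature_vector, like_count) pairs, of how choosing “likes” as the label bakes the crowd’s preferences into the model before training even begins.

```python
# Minimal sketch (not Karpathy's code): "likes" become the training label.
# Assumes a hypothetical dataset `selfies`: a list of (feature_vector, like_count) pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_by_popularity(selfies):
    """Split selfies into "good" (1) and "bad" (0) purely by like count."""
    likes = np.array([like_count for _, like_count in selfies])
    threshold = np.median(likes)          # top half = "good", bottom half = "bad"
    X = np.array([features for features, _ in selfies])
    y = (likes > threshold).astype(int)
    return X, y

def train_selfie_classifier(selfies):
    X, y = label_by_popularity(selfies)
    # Whatever correlates with likes (age, gender, skin tone, framing) now
    # correlates with "good" -- the bias is baked in before training starts.
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model
```

Nothing in this pipeline asks what “good” actually means; the model simply learns whatever the crowd rewarded.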

Naturally, our visual perception is still grounded in culture, colour, and who we are at the moment we see something. Such biases undermine the kind of knowledge an AI accumulates by looking at facial features alone. Karpathy’s failure to reflect on the social values and other factors that make up “good” neatly illustrates the “top-down” component of visual perception (i.e., contextual information, or the bigger picture, shapes what we perceive).

Face and Aesthetic Recognition

An aesthetic surgery website has suggested that procedures around the centre of the face, the addition or removal of facial tissue volume, or changes made to multiple facial areas at once may baffle the iPhone X facial recognition system, since they alter the frontal view of a person’s face.

However, there is still no systematic study to tell us whether a Face ID system will continue to recognize a person after he or she has undergone aesthetic surgery.
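
Apple has not published Face ID’s internals, so any illustration is necessarily generic. The sketch below assumes a typical embedding-based verification pipeline: a face encoder maps each face to a vector, and two vectors are compared against a similarity threshold. The threshold value and the `same_person` helper are illustrative, not Apple’s.

```python
# Generic sketch of embedding-based face verification; Face ID's actual
# pipeline is not public, so the threshold and helper names are illustrative.
import numpy as np

THRESHOLD = 0.6  # assumed cut-off; real systems tune this against false accepts/rejects

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, 1.0 meaning identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(enrolled: np.ndarray, candidate: np.ndarray) -> bool:
    """Accept the candidate face if its embedding is close enough to the enrolled one."""
    return cosine_similarity(enrolled, candidate) >= THRESHOLD

# A procedure that reshapes the mid-face can shift the candidate embedding:
# drifting past the threshold means a false reject (the owner is locked out),
# while two different people landing inside it means a false accept.
```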

What we can infer is that the unintentional bias introduced by humans while designing the AI may confuse the system into mistaking one person for another. So whether AI or facial recognition is fit to serve as a tool for aesthetic surgery or law enforcement is still subject to a great deal of research.

If Frank Morris and his gang could get away with decoy heads made of soap, plaster and hair, who knows what we could get away with using AI?

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow her on Twitter.