The latest viral trend to hit the internet is the #FaceAppChallenge, and with it has come a flurry of debate about data privacy and hidden terms and conditions. For those less familiar with the #FaceAppChallenge: users upload a photo of themselves to FaceApp, which uses artificial intelligence to produce an image of what they will look like in a few decades' time. Celebrities (and everyone else) have been posting the results using the hashtag.
The trend has also sparked a lot of debate about the data privacy issues associated with the app and its terms and conditions. Interest in the privacy aspect of the trend seems to be just as great as, if not greater than, interest in the trend itself (or perhaps that is just this privacy lawyer's dream). People have been asking (and demanding answers to) a myriad of data-privacy-related questions, such as:
- How much access does FaceApp have to other photos on their phones?
- Can FaceApp use their image for commercial purposes?
- Where is the FaceApp server located?
- Where is their data transferred?
There has been extensive press coverage (and a Twitter storm) about these issues, and FaceApp had to release a statement to clarify the concerns that had been raised.
The coverage of these data privacy issues shows that consumer confidence in how a company uses and processes personal data is becoming increasingly important. Data privacy issues are grabbing headlines. Consumers are more aware of these issues, and while that awareness may not (yet) stop them from checking whether they're destined to look like their parents, businesses must appreciate that their reputations can hinge on their transparency in handling personal data.
But while the app may make people look older, it clearly doesn't make them wiser. The sudden surge in the app's popularity raises concerns about people's privacy, and about what this AI could do with an immense archive of people's faces.