> Banks don’t want private account details (like the user’s current balance and credit limits etc) being seen by anybody other than the account holder.
This is exactly the kind of justification they'd use. And surprise surprise there won't be a box that says "I'd rather risk somebody seeing my account details than have a biometric model of my face stored in your database and given to whoever you give it to".
Do people really consider the ethics of introducing tech like this? I guess whether someone thinks this is a good idea just comes down to consequentialism vs. deontology. And, obviously, if they don't do it, I guess someone else will?
The great irony is that often technological innovations are believed to be liberating and done with the motivation of improving the common lot of humanity.
Two examples:
Social networking - Enabling more meaningful and greater connection between people, leading to greater happiness and fulfillment? Or, fostering more divisions, balkanization into "like" echo chambers, promulgating fear and prejudice, leading to greater depression and mass manipulation?
Cryptocurrencies - Disintermediation! Need I say more ;) All kinds of utopian views of how this is supposed to promote freedom, security and efficiency. But top of the wish list for the most paranoid and repressive authoritarian tyrant would have to be a magical way to fully control and monitor all economic activity: a cashless society where every single transaction is done with the government-controlled digital currency, recorded on the government-controlled blockchain.
> Social networking - Enabling more meaningful and greater connection between people, leading to greater happiness and fulfillment?
I don't believe for a second that this was a serious motivation for any of these companies. That might have been in the press release, but most of these social networks were either born from Geocities-begets-Myspace incrementalism, or "watch these idiots give away their personal data" egocentrism.
An abridged survey of recent history:
Snapchat: it'd sure be easier to sext if these pictures disappeared
Instagram: it's easier to take a pic than write something
Twitter: pivot from failing product to internal tool
Facebook: privacy invasion as a service
Myspace 2.0: maybe we can make money off napster?
Myspace: Geocities clone
Geocities: AOL clone for the World Wide Web
AOL: Prodigy clone
Prodigy: Walled garden for selling Usenet access
Napster: OK, this one was probably the only one motivated to enable more meaningful and greater connections between people, leading to greater happiness and fulfillment
> Social networking - Enabling more meaningful and greater connection between people, leading to greater happiness and fulfillment?
That wasn't the motivation-- it was instead to make identities discoverable online. And-- with Facebook-- to make it easier for college students to get laid.
Decentralization is not essential to cryptocurrency as a technology. It's essential to a view of how it could be used for the betterment of many.
The technology can just as easily be applied for other purposes, mandated by an authoritarian government to be the only legal way to conduct commerce. You'd use it not because you want to, but because you want to avoid their ire.
I think we're mixing terminology. There's a difference between distributed and decentralized.
You can have a distributed blockchain, while still being controlled by a central authority (i.e. not decentralized). That's pretty much Ripple, and it still has many of the advantages of blockchain technology: distributed/quick settlement, highly secure, highly available, etc...
It would actually be possible to use a government's money in various new ways. Think of what could happen in the economy if the government provided flexible and free payment services.
(Which is exactly why they'll never do it, but just pointing out that a government-controlled blockchain for normal people would have a LOT of applications)
Actually my government already has rules on that. Using my debit card at a store, withdrawing money from an ATM, or transferring money between bank accounts is free.
> Do people really consider the ethics of introducing tech like this?
But consider the fact that you are using terms out of the humanities, while most of my fellow CS students endlessly griped about how useless their humanities courses were and how they wished the school would get rid of that course requirement (six courses total over 4 years).
Until we teach the tech-priests of the 21st century the great responsibility due to the power that they hold, I doubt things will improve.
I'm not sure I agree with the implicit assumption here that humanities courses necessarily result in more, well, humanity.
In the UK, for instance, we have no requirements for a university degree other than the courses relevant to that degree. I was there to study computer science, not get a rounded education at someone else's insistence.
And yet most of the really rapacious, conscienceless, anti-human stuff seems to me to come from the libertarian and 'bro' fringes of Silicon Valley, seemingly despite the requirements to study more 'humanities'.
> In the UK, for instance, we have no requirements for a university degree other than the courses relevant to that degree.
Which depends almost entirely on the university in question. My computer science degree (Cambridge) had two separate courses focusing on ethics/humanities. Year 1 had Professional Practice and Ethics (which starts as broad as "Ethical theory. Basic questions in ethics. Survey of ethical theories: [...]. Advantages and disadvantages of the two main theories: utilitarian and deontological.", not just as it relates to CS), and Year 2 had Economics and Law (broad introduction to micro/macroeconomics, and a general overview of the law as it related to CS). The course introduction for the latter notes that you are to treat it as if reading a humanities subject:
> One word of warning: many part 1b students may never have studied a humanities subject since GCSE. It is a different task from learning a programming language; it is not sufficient to acquire proficiency at a small core of manipulative techniques, and figure out the rest when needed. Breadth matters. You should spend at least half of the study time you allocate to this subject on general reading. There are many introductory texts on economics and on law; your college library is probably a good place to start.
FWIW, I don't recall many complaints about the presence of these courses. Most seemed to find it useful to get a more rounded view, and it was a nice change of pace from tens of hours of pure computer science a week. It was also likely helpful for my later studies in Law.
> Which depends almost entirely on the university in question.
This is also true in the US. I actually chose to study religion and philosophy in addition to CS. My reasons for doing so aside, I truly benefited from it as it helps guide the type of work I will take. I'm torn on whether it should be required, mainly because I am not in a position to decide what makes a 'better' software engineer.
I didn't doubt that; I just felt that the OP made it sound like doing a UK CS degree would mean only studying pure CS, which is overstating things somewhat, so I wanted to clarify. You aren't necessarily going to be picking classes from across the university to fulfill generic requirements (if nothing else, you don't really have the time to do so), but it's not hyperfocused either.
I would certainly consider courses in professional practice, law and ethics to be far more supportive and on topic than generalised requirements to take a certain number of "humanities" courses.
I agree. It seems to come from the Silicon Valley culture more than anything. After all, Peter Thiel, one of the more extreme examples of this sort of thinking, got a philosophy bachelor's degree, IIRC.
The ethics and actions that spring from libertarian thought certainly appear to be suited more to perfectly rational robots than to imperfect, fleshy and irrational humans, and how people actually live their lives.
There was an article about just that the other week, on here I believe. It detailed how a lot of the controversial tech ideas coming out of places like Silicon Valley come from tech/startup moguls who are so disconnected from reality that they don't understand the concept of ethics or technological consequence. To them it doesn't matter how intrusive a design is; if it solves a problem, that justifies all negatives.
I'm willing to consider that they're all confused Futurama fans. They heard "technically correct...the best kind of correct!" and transmogrified it to "technically a good idea, the best kind of idea!"
> Like any technology, we need to consider the ethics of its application carefully so we don’t build tools that are open to abuse, or worst case, terminators that can travel through time to kill people.
So no: they had the opportunity to clear up any fears or concerns about the project, and instead used it to make a really, really scary joke about robots specifically designed to identify individuals using cameras and kill them with guns.
Nobody at this project gave one real thought about ethics. Ethically, you can't make that joke about your software, it's nauseating.
The terrifying thing is that this type of application "demo" is being built by smart people who are already fully aware of the _theoretical_ concept of ethical violations being enabled by software.
And yet refuse to connect the obvious dots to the dystopian, anti-human capabilities enabled by the tools they're building. "Oh yes well we didn't mean it for THAT".
I'd have WAY more respect (fear) for them if they came out and just honestly explained all the revenue-generating capabilities this could extract from users. Better business. Less disingenuous.
I certainly feel MUCH safer knowing that my software could FORCE me to spend my attention on it for whatever reason. Bravo Machine-Box, really helping make the world a better place.
This is why software needs an association with some teeth, because professional ethical standards don’t grow on trees. Note how a doctor doesn’t just say, “Well if I don’t dose these people without their knowledge or consent, someone else will.”
"When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success." - Robert Oppenheimer
Good analogy. Oppenheimer was a young, brilliant, and ambitious scientist with the money and power to pursue a huge project. The famous Oppenheimer quote upon seeing the result of his work is "now I am become Death, the destroyer of worlds." I don't think modern tech-bros have his level of insight or introspection, but fortunately the stakes are a bit lower.
It's an ethics arbitrage. This kind of business looks for unethical technologies that other companies wouldn't dare to create, and then packages it up in a form that is palatable enough to bring to market at a profit.
I both pay my mortgage and keep ethics in the house. I'm not sure this company or dev was so desperate to make money that this seemed like the last hope.
For most people, the biometric model of their face is available to nearly anyone. Most people have multiple profile photos available on Facebook, Instagram, LinkedIn etc.
> For most people, the biometric model of their face is available to nearly anyone.
I was curious about this claim, so I did some cursory research.
A Pew survey from 2013 [0]--which is perhaps a bit dated--found that 66% of respondents believed a photo of them existed online--which, notably, says nothing about access to the photo or metadata that relates the photo to an identity. Pew found in 2016 [1] that about 68%, 28%, and 25% of US adults, respectively, have a Facebook, Instagram, and LinkedIn account. These are presumably the main vectors for accessing the face:ID pair. So this gives us some ability to quantify the first part of your claim, "most people".
As for the second part of your claim--that this subgroup's face:ID is available to nearly anyone--I did not find data on who or how many people might have access to this information. The vector here is important, though. Let's consider the public UI, which is realistically the only interface most interested entities have access to. With only a name and no other queryable bits of information, finding the matching face is unlikely because of how many identical names there are.

The ability to query other bits like geolocation, work history, the social graph, and, of course, the face itself should greatly increase the chance of finding face:ID, which is minimally an account with a profile picture. The profile picture is not per se sufficient to extract a face model but also not per se necessary, as other face photos might be viewable in the profile.

At this point I can really only speculate about the intersection of privacy settings and photos, but I think it's far from clear that this information is "available", which I take to mean accessible with relatively little effort and means. And again, this is just the people who have a social media account and probably a photo tied to their account, not the population at large.
Of course, there are entities which have access to far more data than this, but that is not "nearly anyone".
How careful have you been about that? Do you just mean not uploading your own profile photo to sites?
What about any birthday parties or group pictures you might be in? Photos for a work event or with friends?
Unless you've been very careful not to appear in any public photos of any kind, especially ones taken by your family and friends (as it is easily traced to you), then I would expect you have just as much of an online photo-profile identity as anyone else.
If you've been that careful, then props to you. Good job.
> I’ve searched for a picture of me, and have only ever found a grainy one from two decades ago. That is it.
But that's not what I mean.
Facebook knows who you are. Maybe you can't find it yourself using search terms, but that's not the point. You have a Facebook profile with an email and a set of pictures and a social web connected, I'm sure, even if you've never signed up. There's just a db flag set that says 'waiting for this person to sign up'.
That could be, but I’ve never uploaded a picture of me anywhere, and I avoid having my picture taken. I’ve never had a social media account with my real identity either, and never had an FB profile, period.
Those are all good steps, and I do want to say, I think our society would be in a better position if more people were concerned about this as you are. I am sorry to be saying these downer things. I don't mean to be defeatist or tell you that there's no hope. It's just that the situation is really that dire.
Phone cameras are high res. If you live in a city or go out in public often in busy places, then the sheer number of people taking selfies and photos is immense and makes it likely that over time, each of us is caught repeatedly in these photos.
With Facebook et al.'s newfound face recognition and social scale, people who have never heard of a computer or phone or Facebook can now be automatically identified and tracked throughout the real world just by their faces in other people's social photos.
I realize this dystopia might not accurately portray your life, but it is also meant as a comment for others. Privacy is not an individual choice anymore. Our social networks have forced a change in expected privacy, and there's little that we can do right now to change that, as the profit motives of corporations like Facebook are aligned this way.
Note: Yes, being in public has an element of privacy. It is reasonable to expect that if you buy some groceries at a store in Atlanta, and the next week walk to Central Park in NYC, that a company in San Francisco who you have no relationship with would not know about it. But that expectation is now gone, and already it seems wild that we could have ever had it. That is a kind of privacy that is lost forever.
Damn, I'd never realised things had gotten that bad. I willingly share photos of myself online but that's my choice. It's sad that even privacy nuts can't hide themselves anymore.
"Now, the modern person is determined by data exhaust—an invisible anthropocentric ether of ones and zeros that is a product of our digitally monitored age."
I don't doubt that in 15-20 years, by a process of deduction from your phone/laptop/browsing habits/credit card usage/address info, a company could, if it bothered, collate that data to get "your picture". And I mean literally zoom in a security cam to snap the photo.
You don't have a smartphone? You don't have a Google account? You don't have accounts on any platform (other than HN, obviously)? I think it's incredibly understated how easy it is to piece together who someone is from their data. You may not think you have pictures out there either but I'd venture to guess that something is out there.
My Google accounts are in no way linked to a real identity. I have little doubt that my identity would be easy to deduce, but it won’t come with a picture, I hope. I’m not engaged in any secret or illegal activities, I just don’t want to give away my privacy, so I’m just cautious rather than appropriately paranoid. I do have a smartphone, and I have no illusions that law enforcement and governments can track me. The average person, and hopefully Facebook, cannot.
Now that this is possible, what stops bad people/companies/governments from (ab)using this?
There's really nothing else you can do apart from wearing a mask and sunglasses, which will also be bypassed soon enough. (Not to mention that no one wants to do that.)
Unlike weapons, a lot of these AI tools have some force for good behind them as well - so good governments won't pass laws against this either.
I think that is highly unlikely. If lots of people are not okay with watching ads, there are economic incentives for people to create ad-free alternatives. Even now we have lots of such alternatives. If someone is not able to afford such an alternative, it may actually be harmful for advertisers to spend their money showing these people ads anyway.
> economic incentives for people to create ad-free alternatives
Yes, but they will have to fight a giant up-hill battle because ads exploit a quirk of human psychology, which is that most people strongly undervalue their own attention.
I think it's hard to consider at this moment, but I'm sure this is something they'll at least investigate and consider. The ad industry is cutthroat, and if this gave anyone an edge, you can bet all of them will follow suit in time.
There's something about software like this that really creeps me out, even though I realize it's not ridiculously advanced (i.e. it's way more common than I think) anymore. That might just be a personal aversion, though. I can imagine useful scenarios for this even if I get a bit of an icky feeling from it.
My co-founder and I have talked about things like this as an "anti-cheating" measure (we developed a take-home assessment platform), but it always feels way too overboard and invasive for an exaggerated problem (and I'm just against it in pretty much every way imaginable).
Interestingly this somehow feels better than overt measures like ProctorU, but that's an emotional reaction and not a logical one. In some ways it's probably much worse.
> Rather than one proctor sitting at the head of a physical classroom and roaming the aisles every once in a while, remote proctors peer into a student's home, seize control of her computer, and stare at her face for the duration of a test, reading her body language for signs of impropriety.
That article is from 2013; I wonder how much of this is now at least partially automated (i.e. no longer relying solely on human remote proctors)?
In my experience ProctorU is not automated. They use a branded version of logmeinrescue that allows ProctorU to take full control of your PC and record from your webcam. You can also see the proctor. However, the proctor's camera is usually paused while taking the exam and they proctor more than one exam at a time. ProctorU also executes custom scripts on your PC to check for suspicious software and virtualization.
Source: As a remote student, I have used ProctorU several times. Most recently within the last few months. I use a dedicated PC for this proctoring.
Online proctoring is actually a really busy space in edtech. There are numerous companies with products deployed that are fully automated. They record videos of students taking the tests through a webcam, then send the analysis back to the instructors highlighting which videos are worth watching.
I never got the point of that. You can use an HDMI/DVI splitter to let your accomplice see your screen, and you can use your monitor's PIP function to let your accomplice send messages to you. Both of these are totally undetectable to the student's computer.
I had no idea about this, thanks for the link. It's kind of difficult to put into words how crazy this is to me without hyperbole or cliche. Very much a through-the-looking-glass feeling.
Yeah, this whole thing seems a bit silly. You can't trust the webcam to be real or even functional.
And even if you could trust the entire website-to-webcam path end-to-end, you can't trust the image the hardware is reading. There's a reason that other face recognition systems like Windows Hello require that you have an IR camera, so that it knows it's not just looking at a photograph of a person.
It's just a cheap/easy biometric. Retina patterns or fingerprints are better biometrics, but they generally require special-purpose hardware for ease of use.
So what stops me from creating a device that registers itself as a webcam natively, but just puts a loop of a pre-recorded video that satisfies the face recognition software?
Stop trying to find solutions to problems that aren't real.
If you can be prosecuted for storing someone's diagnostic medical images improperly under HIPAA law, this seems like a VERY risky thing for a company to implement.
The policy might have been changed? An article I just read says: "taking a photo via webcam, uploading a photo of a picture id issued by the government, and making a record of their typing pattern."
> Banks don’t want private account details (like the user’s current balance and credit limits etc) being seen by anybody other than the account holder.
Unless it's an in-person interaction, a face has little security value, because it's not a secret. Getting a photo, or even full motion video of someone often just requires finding their Instagram page.
I think this is really about having powerful machine learning tools like face recognition, image recognition, content personalization and recommendation etc. in the browser.
Technologically speaking: Yeah, that's a nice feature.
Real world: This will be awful. I really don't want any DB to have a photo of me associated with a transaction or authentication. Yes, I do have profile pictures, but allowing a service to get a "stream" of your face will be way worse, and I cannot imagine what would happen if this DB got compromised... anyway... still a nice Black Mirror episode, though.
I can see from a privacy standpoint, this may cause some concern.
However, from a credit card processor's point of view, combating "friendly fraud" is a real concern, and this could be an excellent tool to prevent it.
For example, take the scenario where a transaction has been processed and, 6 weeks later, it is disputed because the cardholder doesn't recognise the transaction. Perhaps the wife used the husband's card whilst he was in the shower, for [insert candy crush clone].
A capture of the user's face would definitely help the merchant win the representment against Visa/Mastercard.
Or take a scenario where goods are being shipped cross-border, let's say from China to the US, and it's for a large amount. This could be an extra step where, if the data hasn't passed a certain threshold, further information is required: a real-time snapshot and validation proving the cardholder is legitimate ensures the transaction goes through.
Ultimately, I do understand it's about weighing privacy concerns. But that doesn't mean some good can't come out from this.
Credit card processors and banks could do a lot to combat fraud that they don't do. E.g., realtime 2FA per transaction, which, as a bonus, could give the user a chance to categorize the purchase in their budget.
They do exactly the amount of security that they think is most profitable - balancing losses to fraud vs abandoned sales because of inconvenience.
Jumping straight to face recognition is a bit like physical security adding strip searches when they haven't yet bothered visually scanning for weapons.