Background Checks Now Include Twitter, Facebook
Employers Are Searching Social Media For Reasons Not to Hire You
June 24, 2011 -- Surviving a background check just got tougher. And it's soon going to get harder still, as Internet search technology grows more sophisticated: Employers have started scouring the web—social networking sites in particular—to check up on potential hires.
If you've ever posted anything that suggests you might be somebody who likes a racist joke, drinks too much booze or maybe is a bit too fond of guns—these all can be grounds for an employer telling you, "Thanks, but no."
And it's all perfectly legal. The Federal Trade Commission has just given the okay for Social Intelligence Corp. to sell these reports to employers, and the file on you will last for seven long years.
But suppose you're clean as a whistle with your online use of social network sites. It's still possible that among your Facebook friends, unbeknownst to you, there's someone with a criminal record. An employer could turn you down for having iffy friends and not run afoul of any employment discrimination law.
"You can be deemed a bad apple by association," says Pam Dixon, executive director of the World Privacy Forum. "Are all your friends gay, rich, poor? Do they all live in California or New York or Kansas? What are your hobbies? Do they look expensive or entail high risk?" If so, Dixon warns, your chances of landing that dream job, depending on your would-be employer's predilections, may go poof. The employer's decision not to hire you may be ethically outrageous. But it's not illegal.
"It's kind of scary," says Tena Friery, research director for the Privacy Rights Clearinghouse. "We know social networking sites can be hacked—that someone can post something under someone else's name. What happens if somebody wants to sabotage a job applicant? And would the potential employee even know it was taking place?"
Likely not, says Michael Fertik, founder and CEO of Reputation.com, which provides products and services that a job seeker (or any other user of the Internet) can use to minimize the impact of false or inaccurate information posted about them. It's not the present, says Fertik, that job seekers should fear; it's the future.
Right now only one company—Social Intelligence in Santa Barbara, Calif.—specializes in conducting Internet background checks that are compliant with the Fair Credit Reporting Act (FCRA). The Act regulates the collection, dissemination and use of consumer information. Where a search turns up evidence that might be used to deny an applicant credit (or a job), it requires that employers notify applicants they are in danger of being disqualified and state the evidence on which disqualification would be based. The applicant then has five days to dispute the finding.
Fertik describes Social Intelligence as "not that interesting a tool," since it relies on human analysts reviewing what's been dug up on the Internet to verify, say, that the guy in the Ku Klux Klan outfit really is the same guy being considered for V.P. of Human Resources.
Fertik calls Social Intelligence's searches "just a preview of what's to come, when all this will be done by machine. Then it will get a lot more dangerous," at least from the candidate's point of view.
Background checks of tomorrow, he predicts, will use far more sophisticated technology—facial recognition software, for example, that would obviate the need for a human being to vet the Klansman photo. The person being scrutinized will never know his data spoor is being tracked, nor will he know which doors opened for him and which closed because of his social network postings. Consumers, he says, will gravitate to services like Reputation.com because merely setting one's Facebook privacy controls to their highest level won't be sufficient to provide the protection one will need.
The founder and CEO of Social Intelligence, Max Drucker, maintains that his company's use of human analysts is a strong plus: at present, people are more adept than software at taking into account such nuances as the context of a social media posting. If the company's proprietary software finds an incriminating image, say, the human reviewer may decide it's harmless because of some contextual explanation. For example: "Look at this embarrassing costume my boss insisted I wear to the office Halloween party."
The report given to an employer includes only the kind of information an employer legally can use to evaluate a candidate, as determined by the Equal Employment Opportunity Commission (EEOC). "We redact anything that isn't permitted," explains Drucker. Nor does the search contain only negatives. It also provides positive criteria: any charitable or volunteer work the candidate may have done, any awards they may have won, or anything else that suggests they have displayed leadership in their field.
Employers, he points out, face a dilemma. If they conduct their own search of a candidate by, say, Googling his or her name, they expose themselves to one kind of liability: the possibility that they will see types of information that legally cannot be used in the decision-making process. By seeing that the candidate is Catholic, for instance, they open the door to a religious discrimination suit if they don't hire him. If, however, they fail to perform a background check and then hire somebody with homicidal tendencies who one day kills his co-workers, they're liable for having failed to perform due diligence.
Why do people post stuff that they must know will only get them into trouble some day? "Why do people get tattoos of swastikas or drive drunk?" asks Drucker. "No one knows why they make stupid decisions. They just do. It's like wearing a T-shirt to a job interview: You're creating a public persona that you're presenting to the world."
And that world, more than ever, includes potential employers.