Technical Fraternity: hazing in technical interviews

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Technical hazing in interviews (also referred to as white-boarding)
To all of you claiming knowledge of data structures is some kind of signal in interviews? Stop it.

Seriously.

It isn't reasonable to expect a candidate to derive that elegant solution it took two PhDs 8 months to discover.

Algorithm questions are not a way to make sure you’re hiring someone who is capable of coding. Algorithm questions are a way to discriminate against certain kinds of people.
I've participated in many, many interviews over my career. I've read everything Google has ever published about interviewing. I've surveyed the literature. It turns out when you task actual researchers with measuring the effectiveness of interview techniques you discover something: Noise does not become signal through repeated hazing rituals.
Google and Microsoft both abandoned their famous "brain teaser" questions. Why? Because the research demonstrated they are uncorrelated with job performance. Everything you think is great about your interview process? It's almost entirely luck and confirmation bias.

A master has failed more times than a novice has even tried.

I discovered that if I wrote code forty hours a week, I could learn programming. I could type code as easily as I type an email, and the logic finally made sense. This was a revelation to me. I had given up on programming because I thought it was too hard and I could never learn it. I found that if I put in enough time and effort, I could learn how to code.
I don't have a computer science degree.

I'm happy to have a discussion about the tradeoffs between red-black and AVL trees but that's a privilege I have from writing software since I was 10 years old.

I've implemented a copy-on-write immutable system using atomic compare-exchange to do concurrent in-memory transactions with rollback on 40+GB datasets. (I also added JavaScript to the same product to do in-proc formula calculations). My first program? A DOS program launcher written in QuickBasic. Not impressive in the least.
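That concurrency scheme is worth unpacking. This is not the author's code, but the pattern he names (copy-on-write plus an atomic compare-exchange to publish, with retry serving as rollback) can be sketched in a few lines of Python. Here a lock stands in for the hardware compare-exchange instruction, and the `AtomicRef` and `transact` names are illustrative:

```python
import threading

class AtomicRef:
    """Toy stand-in for an atomic reference. A real implementation
    relies on a hardware compare-exchange instruction; here a lock
    models the same all-or-nothing semantics."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_exchange(self, expected, new):
        # Publish `new` only if nobody else published first.
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

def transact(ref, update):
    """Optimistic copy-on-write transaction: snapshot, derive a new
    version from a private copy, publish via compare-exchange, and
    retry (the 'rollback') if another writer won the race."""
    while True:
        snapshot = ref.load()
        candidate = update(dict(snapshot))  # work on a private copy
        if ref.compare_exchange(snapshot, candidate):
            return candidate
        # Conflict: discard our version and start over on fresh state.

state = AtomicRef({"balance": 100})
transact(state, lambda d: {**d, "balance": d["balance"] - 30})
print(state.load()["balance"])  # 70
```

Readers never see a half-applied update: they either observe the old version or the new one, which is what makes the rollback essentially free.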

I'm proud of the 3-way visual diffing of blueprints I wrote for PlanGrid. I wasn't capable of writing that code the day I was hired. Every day is a process of learning new things. In two months I've come to know more about Mach ports than most software engineers ever will, yet I've barely scratched the surface.

I've done such a wide variety of projects that I've run into situations where these data structures were useful. I investigated them, read the compsci papers, and tested them because they were relevant for the problem at hand. I also had the luxury of free time to study things like CPU branch prediction, read papers on implementing associated types by propagation of generic constraints, and write basic GLSL shaders.

I was the same smart capable person before those experiences. If the door had been shut in my face I might not even be a software engineer today. The code I wrote in my first "real" programming job was garbage. I think I had only first discovered hash maps a year or two before that. Hell the code I wrote two years ago is garbage compared to what I'd write to solve the same problem today.

Software engineering is not some magic discipline immune from human factors. If you've never written a line of code before, go to Stanford. Graduate top of your class. It doesn't matter. Everything you make will be dog**** because you haven't written enough failures to truly understand software.

A general contractor can't build the Empire State building as their first project, no matter how good they are.

No one would give a **** about Frank Lloyd Wright's first building if he had died right after completing it.

When I read @RedQueenCoder's blog post... it really angried up my blood.


You Get What You Measure

I can teach someone AVL trees in less than a day. We can whiteboard it together. Within a week they can implement it and the associated unit tests.
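For what it's worth, the claim is plausible: the whole of AVL insertion fits on one whiteboard. A hypothetical Python sketch of the textbook rotations (not anyone's production code):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def balance(n):
    return height(n.left) - height(n.right)

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(n, key):
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    else:
        n.right = insert(n.right, key)
    update(n)
    b = balance(n)
    if b > 1:                    # left-heavy
        if key >= n.left.key:    # left-right case: rotate child first
            n.left = rotate_left(n.left)
        return rotate_right(n)
    if b < -1:                   # right-heavy
        if key < n.right.key:    # right-left case: rotate child first
            n.right = rotate_right(n.right)
        return rotate_left(n)
    return n

# Inserting sorted keys would degenerate a plain BST into a list;
# the rotations keep this tree at logarithmic height instead.
root = None
for k in range(1, 8):
    root = insert(root, k)
print(root.key, height(root))  # 4 3
```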

Know what else? They can learn the same thing from Cracking the Coding Interview. In fact they can learn the whole book in a few weeks, maybe a few months. It's just rote memorization.
  • What does that say about their ability to build well-structured systems and avoid spaghetti code?
  • What does that say about their ability to collaborate on a team? To fluidly switch between leading and fighting for their ideas while being a follower on others' ideas? To create a safe group environment where all group members feel like they can ask "stupid" questions and everyone gets a chance to speak?
  • What does it say about their ability to "hit the ground running" and start building features right away?
  • What does it say about their ability to learn new frameworks, methodologies, and yes even data structures?
  • What does it say about their ability to interact with designers or clients? To imagine failure scenarios and make sure the code handles them? To write good tests?

Answer: Nothing. It just says they can read an algorithm and memorize it. They can learn which data structures to apply to which word problems.

If you’re struggling to make ends meet because you’re a single parent who can’t get past the velvet ropes to the land of coding opportunity, you do not have time to learn these things. You are told you’re not welcome and you give up.
Everyone starts somewhere.

Stop closing doors.

Stop using stupid hazing rituals.

Look for potential and build on it.

Most of your competitors are busy looking for pre-made diamonds (and failing even at that). Look for the diamonds in the rough. Pair them with experienced mentors and watch them grow.
Article link: http://www.russbishop.net
Related article: http://redqueencoder.com/the-algorithms-of-discrimination/
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
Your last coding post was about someone rating their interview experiences, where all the WB coding interviews were bad (these were also the ones where no offer was made), and the ones without WB coding were good (the ones where offers were made). This person has about 3 years of industry experience, refers to themselves as a "senior coder", and appears to have had 4 jobs in those 3 years. It sounds as though this actually makes a case for WB coding interviews, not against them.

This post (the original article, not Bishop's) is about someone who believes that algorithmic questions discriminate unfairly (the example used is implementing a linked list :wtf:). This person has 2 years of industry experience, and has had 3 jobs in those 2 years. This also suggests that the companies that passed based on algorithmic questions did the right thing, and those that offered jobs without such questions were mistaken.

Both of these people started their own consulting companies shortly after their attempts to work in industry. Doing this so early into one's career isn't generally a good sign, although of course I hope it works out for them.

The author above is just railing against a straw man:
To all of you claiming knowledge of data structures is some kind of signal in interviews? Stop it.

Seriously.

It isn't reasonable to expect a candidate to derive that elegant solution it took two PhDs 8 months to discover.

No interviewer cares about "knowledge" in the sense of rote-learned algorithms - well, if they do, they're either part of a fast-sinking ship or will quickly be asked to stop giving interviews. I have participated in thousands of interviews, and have never seen anybody expect someone to know something by heart. It is so stupid, and such an obvious interview fallacy, that I find it hard to believe this is a legitimate problem in the industry. I have, however, heard plenty of people complain that they were expected to "know" something, when they were really just asked for a general approach, points to consider, or (interactive) algorithm derivation. These are usually people who cannot even begin to articulate how they would solve the problem, and quite frankly probably couldn't even understand what they were being asked (which is a red flag worth knowing).

As for questions that took 2 PhDs 8 years to solve - while this is obvious hyperbole, any interviewer worth their salt knows that asking a question nobody can answer is completely pointless, since the purpose is to rank candidates, which can't be done if they're all just marked as "didn't know the answer". Algorithmic questions are usually structured such that there is a range of solutions with varying degrees of sophistication, and various segues that capture different skill sets and abilities. Candidates aren't expected to just "get" optimal solutions - if they do, that's great, and a good (and rare) data point, but if not, the interviewer will typically give hints, or even answers with additional segues, in order to understand how well the candidate understands the answers (far more important than getting anything correct). Additional points are typically given for interesting out-of-the-box answers over correct answers (hiring people who all think exactly the same way isn't generally a scalable approach).

The rest of his post is really just a conflation of expected-case scenarios with best-case scenarios. Sorry, it doesn't behoove any employer to hire you over others with more supporting evidence because of what you may one day be.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
PS. What you missed is that the PhD reference referred to the actual algorithms, i.e. the time it took two PhDs to develop them. Whiteboarding a linked list is a learned skill (monkey see/monkey do) vs. the effort and ability to develop the concept in the first place. Also, he stated 8 months, not years (1955-1956).

The point which you've clearly missed is that asking someone to repeat this on a whiteboard is not going to aid in finding similarly talented individuals, but rather separate those who swotted algorithms prior to the interview from those who didn't. And if that's all they required, then ask yourself why it was even necessary to stipulate that the job required an iOS developer, when clearly what they wanted was the capability to regurgitate algorithms -- i.e. the difference between the theory you can swot for vs. the skills you only master through actual development.

Both individuals in this scenario are significantly skilled without a formal education. As for redqueencoder (the related link): she worked previously for www.sonoplot.com, or more specifically for Brad Larson (http://www.sunsetlakesoftware.com/about), in what I would consider a highly technical field. As for the article I posted in this thread: Russ has recently (last month) been appointed by Apple to the Development Tools team, the team that develops compilers, IDEs and the like.

But yet again you're stuck with the preconceived notion that the problem is not with the interview technique but rather with the particular individual. Sorry, but you're simply wrong; an ability to regurgitate algorithms is a poor way to identify uniquely talented individuals. All you can expect from that type of process is to single out those who swotted algorithms prior to the interview from those who didn't, or even worse, those who have practiced techniques to look good in hazing interviews.

PS. As to your questioning the rationality and/or reality of these scenarios: Google or Twitter searches should easily confirm that this occurs too frequently to simply be dismissed as the inaccurate rants of a few unskilled individuals.
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
[)roi(];18503302 said:
Ps. What you missed is that the PhD reference, referred to the actual algorithms i.e, the time it took two PhDs to develop these algorithms. I.e. whiteboarding a linked list is a learned skill (monkey see/monkey do) vs. the effort / ability to develop the concept

:confused: I didn't miss that. Also, anyone who can't code a linked list really has no business calling themselves a programmer - it's a basic litmus test. Even calling it an "algorithm" is a stretch.

[)roi(];18503302 said:
The point which you've clearly missed is that asking someone to repeat this on a whiteboard is not going to aid in finding similarly talented individuals, but rather going to separate those who swotted algorithms prior to the interview from those that didn't & if that's all that they required --

The whole point of my post is that nobody actually asks people to regurgitate algorithms. Asking for a linked list is more likely to see how the candidate will implement the class, and if they have a basic grasp of the language and pointers/references. It's basically just another fizzbuzz test, before the actual interview starts.
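For readers following along, the exercise being debated really is small. A minimal singly linked list in Python might look like this (the class and method names are illustrative, not from any post in the thread):

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

class LinkedList:
    """The kind of screening exercise under discussion: a singly
    linked list with push, pop, and traversal."""
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # New node points at the old head and becomes the new head.
        self.head = Node(value, self.head)

    def pop_front(self):
        if self.head is None:
            raise IndexError("pop from empty list")
        node, self.head = self.head, self.head.next
        return node.value

    def __iter__(self):
        # Walk the chain of next-pointers until we fall off the end.
        node = self.head
        while node:
            yield node.value
            node = node.next

xs = LinkedList()
for v in (1, 2, 3):
    xs.push_front(v)
print(list(xs))  # [3, 2, 1]
```

The point of asking it is visible in the code itself: there is almost no algorithm here, just pointer bookkeeping and an edge case (the empty list).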

[)roi(];18503302 said:
then ask yourself why was it even necessary to stipulate that the job required an iOS developer; when clearly what they wanted a capability to regurgitate algorithms i.e. the difference between the theory you can swot for vs. the skills you only master through actual development.

Er... because they want people who know how to code for iOS and are able to develop the algorithms they need. Once again, I don't think anyone is actually expecting people to swot algorithms. If someone has to swot for a linked list, well...

[)roi(];18503302 said:
Both individuals in this scenario are significantly skilled without a formal education. As for redqueencoder(the related link); she worked previously for www.sonoplot.com or more specifically for Brad Larson(http://www.sunsetlakesoftware.com/about) -- in what I would consider a highly technical field;

redqueencoder worked there for 1 year and 1 month in her very first job, after which she spent 5 months at the next job and 6 months at the one after that. If you call that evidence of being "significantly skilled", well, let's just say that we have different standards.

[)roi(];18503302 said:
As for the article I posted in this thread; Russ has recently (last month) been appointed by Apple to the Development Tools team; the team that develops compilers, IDEs and the like.

Russ seems to be a very good developer - I never said he wasn't. I don't think he has a very good understanding of the interview process though.

[)roi(];18503302 said:
But yet again you're stuck with the preset notion that the problem is not with the interview technique but rather the particular individual. Sorry but you're simply wrong; an ability to regurgitate algorithms is a poor way to identify uniquely talented individuals. All you can expect from that type of process is to single out those who swotted algorithms prior to interview from those that didn't, or even worse, those who have practiced techniques to look good in hazing interviews.

You're arguing at cross purposes. I don't disagree that regurgitating algorithms is a poor way to interview; the point is that it doesn't happen anywhere near as often as people like to believe. Equating "whiteboard coding" with regurgitating rote-learned algorithms is just stupid.

[)roi(];18503302 said:
Ps. As to your questioning the rationality and/or reality of these scenarios; Google or Twitter searches should easily confirm that this occurs too frequently to simply be dismissed as the inaccurate rants of a few unskilled individuals.

There is no way that the number of times it shows up on Google or Twitter could possibly be used to gauge the proportion of times this actually happens.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
cguy said:
:confused: I didn't miss that. Also, anyone who can't code a linked list really has no business calling themselves a programmer - it's a basic litmus test. Even calling it an "algorithm" is a stretch.

Now who is being silly; fizzbuzz vs. linked list is an awful choice for a comparison.

cguy said:
The whole point of my post is that nobody actually asks people to regurgitate algorithms. Asking for a linked list is more likely to see how the candidate will implement the class, and if they have a basic grasp of the language and pointers/references. It's basically just another fizzbuzz test, before the actual interview starts.

Rubbish... give me one practical example where linked lists are useful in iOS applications. Oh, and I guess you missed the part where the interviewer wanted the answer whiteboarded in Java, and didn't accept C as an alternative because he only knew Java. Wonder how many iOS apps are built in Java... Care to offer another BS explanation for that?

Oh, and while we're at it: I guess you'd like to believe nothing good was ever developed by a skilled developer without them first swallowing all the theoretical textbooks, because the hazing mentality says you can't program if you can't regurgitate theory at the drop of a hat.

cguy said:
redqueencoder worked there for 1 year and 1 month in her very first job, after which she spent 5 months at the next job and 6 months at the one after that. If you call that evidence of being "significantly skilled", well, let's just say that we have different standards.

Now you're just clutching at straws; your imagination vs. reality. Brad Larson noted in more than one post that a large portion of the code was written by her; but I guess for you that doesn't count, because she never studied up on linked lists or the like before attempting it. ...or maybe you question Brad's credentials, or his opinion, or his choice, or...

cguy said:
Russ seems to be a very good developer - I never said he wasn't. I don't think he has a very good understanding of the interview process though.

What a joke... I guess in your warped view only you and the Whiteboard Hazing Fanclub are the ones in the know, and everyone else who doesn't use hazing apparently screws up.

cguy said:
You're arguing at cross purposes. I don't disagree that regurgitating algorithms is a poor way to interview; the point is that it doesn't happen anywhere near as often as people like to believe. Equating "whiteboard coding" with regurgitating rote-learned algorithms is just stupid.

Cross purposes... Err, NO. It's a BS way to conduct interviews that only serves to prove a candidate can regurgitate stuff written in some textbook; i.e. the monkey can read. Why the f..k would anyone refuse someone for something that can be quickly referenced in a textbook; FFS how long does it really take to understand linked lists (if ever there was a situation that justified their use -- care to nominate a situation that stipulates their use within an iOS application?)
As Russ also explained, both Google and Microsoft abandoned their "brain teaser" questions because research never could corroborate the belief behind them. Whiteboard hazing is just another form of this: BS that is unsupported by fact. Care to offer up research that corroborates it?

cguy said:
There is no way that the number of times it shows up on Google or Twitter could possibly be used to gauge the proportion of times this actually happens.

But apparently you have your mind set on hazing rituals, most likely because you at one point in your life bought into this BS. CS students fresh out of varsity will likely vomit this stuff out on demand; but programmers they're not; that takes experience -- the very thing that hazing BS ignores.
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
[)roi(];18503346 said:
Now who is being silly; fizzbuzz vs. linked list -- awful choice for comparison.
:wtf:
[)roi(];18503346 said:
rubbish... give me one practical example where Linked Lists are useful in iOS applications.
:wtf:
[)roi(];18503346 said:
Guess in your warped view only you and the Whiteboard Hazing Fanclub are the ones in the know. Everyone else who doesn't use hazing apparently screws up
:wtf:

[)roi(];18503346 said:
Why the f..k would anyone refuse someone for something that can be quickly referenced in a textbook; FFS how long does it really take to understand linked lists (if ever there was a situation that justified their use -- care to nominate a situation that stipulates their use within an iOS application?)

Apparently you have your mind set on hazing rituals, most likely because you at one point in your life bought into this BS. CS students fresh out of varsity will likely vomit this stuff out on demand; but programmers they're not; that takes experience -- the very thing that hazing BS ignores.

linked list == hazing ritual... :wtf:

You're arguing at cross purposes. I don't disagree that regurgitating algorithms is a poor way to interview; the point is that it doesn't happen anywhere near as often as people like to believe.

[)roi(];18503346 said:
Cross purposes.. Err NO.. It's a BS way to conduct interviews, that only serves to prove that a candidate can regurgitate stuff
:wtf: :wtf: :wtf: /thread
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
cguy said:
:wtf: ... :wtf: ... :wtf: ... linked list == hazing ritual... :wtf: ... :wtf: :wtf: :wtf: /thread

So in the absence of a real answer you simply add WTFs everywhere (a little childish).

Hazing ritual == whiteboarding == (in this case linked lists). I.e. "We had to do it to prove our worth, so we will, from now on and in perpetuity, expect every candidate to do the same." If you believe this to be the exception, then how about you share articles that support that view; basically I don't agree with you that WB used to assess textbook knowledge is the exception rather than the rule.

Justify the question in terms of its value in deciding that one iOS developer is better than another; support the decision to use WB with factual research that corroborates it as a good way to filter out the best candidates. Clearly the "brain teaser" questions were abandoned because they could not. So how is this any different?
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
The comparison of fizzbuzz to linked lists is off-kilter; it equates something I can solve with a single line of code to what is essentially a request to define a new type.
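For the record, the single-line claim does hold up; a Python sketch of fizzbuzz as one list comprehension, relying on the fact that `"Fizz" * False` is the empty string and `"" or s` falls through to `s`:

```python
# One expression per value: build "Fizz"/"Buzz"/"FizzBuzz" from the
# divisibility tests, falling back to the number itself when both fail.
fizzbuzz = ["Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0) or str(n)
            for n in range(1, 16)]
print(fizzbuzz[14])  # FizzBuzz
```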
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
[)roi(];18503386 said:
So in the absence of an a real answer you simply add WTFs everywhere (a little childish)

You think your garbled rant warranted a response?

[)roi(];18503386 said:
Hazing ritual == whiteboarding == (in this case Linked Lists). I.e. "We had to do it to prove our worth, so we will from now, in perpetuity, expect every candidate to do the same". If you believe this to be the exception; then how about you share articles that support that view; basically I don't agree with you that WB used to assess textbook knowledge is the exception rather than the rule.

This is just a paranoid fantasy - one that postulates that most interviewers (who are also coders) are idiots.

[)roi(];18503386 said:
Justify the question in terms of its value in deciding that one iOS developer is better than another; support the decision to use WB with factual research that corroborates it as a good way to filter out the best candidates. Clearly the "brain teaser" questions were abandoned because they could not. So how is this any different?

If it isn't obvious to you why the inability to implement one of the most basic data structures is a good indicator that someone cannot actually code, there's not much more I can do.
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
[)roi(];18503402 said:
The comparison of fizzbuzz to linked lists is off kilter; equating something I can solve with a single line of code to what is essentially a request to define a new Type.

The salient point of fizzbuzz isn't the number of lines it can be solved in, or whether or not it is a type; it's that it is rudimentary.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
cguy said:
You think your garbled rant warranted a response?

cguy said:
This is just a paranoid fantasy - one that postulates that most interviewers (who are also coders) are idiots.

cguy said:
If it isn't obvious to you why the inability to implement one of the most basic data structures is a good indicator that someone cannot actually code, there's not much more I can do.

Again you miss the obvious: knowing the solution to a theory-based question, rudimentary or not, is simply a rubbish way to single out the best iOS developers. And don't ignore, in this case, the weird insistence that the whiteboarding had to be done in Java, which to me implies that someone who had never written a single line of iOS code could pass the WB test. Why? Because not permitting the WB in Objective-C, Swift or C implies that the people conducting the interview were ill suited to measure iOS skills.

But let's get back to the bigger question of why WB should be seen as paramount when, for example, "brain teasers" were discontinued because they could not measurably prove anything.
I.e. how does a WB test covering material you can swot for prove that you are a skilled developer, as opposed to somebody who just swotted before the interview? Can you even make the distinction between the two simply from the answer to a WB question?
 
Last edited:

skimread

Honorary Master
Joined
Oct 18, 2010
Messages
12,419
The article might be applicable to the top Silicon Valley companies with $200K+ salaries. In South Africa companies really just want to find out if candidates are putting BS on their CVs.
 

Beachless

Executive Member
Joined
Oct 6, 2010
Messages
6,003
I quite agree with the topic. You find that most interviewers go online, read a few articles, then copy and paste the questions they like into their list of questions to ask. In most cases they only use the questions they know the answers to, on topics they have experience in.
This will get you a candidate who has done the same: gone online, read the articles and memorised the answers.
A really good interviewer will ask a candidate what they worked on previously and then base the questions on that experience, but that takes someone with a very broad experience base, or a panel of people.
 
Last edited:

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
The article might be applicable to the top Silicon Valey companies with $200K+ salaries. In South Africa companies really just want to find out if candidates are putting BS on their CVs.
In this case it was certainly not in that salary bracket; for her skills I would imagine it would be around $100K, and that the job would probably have been based in Michigan and not SV.

But that's not the point. The point is why these style WB questions are used in interviews when all it can prove is:
"they can read an algorithm and memorize it. They can learn which data structures to apply to which word problems."

Not forgetting that introverts would probably be more inclined to fail any test expecting them to perform in front of a group. How natural is this? E.g. how often in a day of programming are you expected to WB program your way to solutions?
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
I quite agree with the topic. You find that most interviewers go online read a few articles then copy and paste the questions they like in their list of questions to ask. In most cases they only use the questions that they know the answers to and topics they have experience in.
This will get you a candidate that has done the same and gone online to read the articles and memorise the answers.
A really good interviewer will ask a candidate what they worked on previously and then base the questions on that experience but that takes someone who has a very broad experience base or a panel of people.

The far better approaches I've seen usually involve multiple interview steps, where the candidate not only answers questions about what they've done and why (their experience and thought process), but is also expected to write programs on a computer (not a WB). For the introverts, they aren't even expected to perform in front of a group: e.g. problems are given that the candidate can complete at home. Further to this, there is exposure to the various parts of the team they would work in, allowing the team members to play a role in the selection process and also giving the candidate a clear picture of the work environment.

Surprisingly, some of the bigger SV companies have some of the worst interview practices -- fortunately not all of them.
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
[)roi(];18503614 said:
Again you miss the obvious: knowing the solution to a theory-based question, rudimentary or not, is simply a rubbish way to single out the best iOS developers. And don't ignore, in this case, the weird insistence that the WB had to be done in Java, which implies that someone who never wrote a single line of iOS code could pass the WB test. Why? Because not permitting the WB in Objective-C, Swift or C implies that the people conducting the interview were ill suited to measure iOS skills.

Perhaps it's because she claims to have completed a course on Advanced Java Development? If it's on the resume, it's fair game.

[)roi(];18503614 said:
But let's get back to the bigger question of why WB should be seen as paramount when, for example, "brain teasers" were discontinued because they could not measurably prove anything.
I.e. how does a WB test covering stuff you can swot for prove that you are a skilled developer, vs. somebody who just swotted before the interview? Can you even make the distinction between the two simply from the answer to a WB question?

Brain teasers are toy questions, while working on a whiteboard with someone, communicating concepts in order to solve a technical problem, is actually a pretty standard day in the office. I'm not sure why you think they're equivalent.

Yes, I can make a distinction because we ask many algorithmic problems, and none of them reduce to just implementing some well known algorithm.
 
Last edited:

Beachless

Executive Member
Joined
Oct 6, 2010
Messages
6,003
[)roi(];18503684 said:
The far better approaches I've seen usually involve multiple interview steps, where the candidate not only answers questions about what they've done and why (their experience and thought process), but is also expected to write programs on a computer (not a WB). For the introverts, they aren't even expected to perform in front of a group: e.g. problems are given that the candidate can complete at home. Further to this, there is exposure to the various parts of the team they would work in, allowing the team members to play a role in the selection process and also giving the candidate a clear picture of the work environment.

Surprising some of the bigger SV companies have some of the worse interview practices -- fortunately not all of them.

I don't think there is really any perfect solution. As a company you don't really need to care if some great people slip through the cracks; as long as what you do gets you good results, you will be OK. Fortunately there are different companies, and as a candidate you should find something that suits you. If you struggle, you need to play the game: figure out what works and gain those skills. Even if you have to fake it for a few hours.
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
I quite agree with the topic. You find that most interviewers go online, read a few articles, then copy and paste the questions they like into their list of questions to ask. In most cases they only use the questions they know the answers to, on topics they have experience in.
This will get you a candidate who has done the same: gone online, read the articles and memorised the answers.
A really good interviewer will ask a candidate what they worked on previously and then base the questions on that experience, but that takes someone with a very broad experience base, or a panel of people.

I certainly agree that one should explore a candidate's prior experience and ask them questions on that; however, it is also important to additionally ask some standardized questions (which change regularly, and are local to the hiring group, for the reasons you mention). I am sure that many interviewers go online and find some poorly thought-out standard questions to ask -- bad interviewers and bad companies exist, and there's not much one can do about that, but it's usually self-correcting (companies that hire crap developers eventually develop crap and close).
 

cguy

Executive Member
Joined
Jan 2, 2013
Messages
8,527
I don't think there is really any perfect solution. As a company you don't really need to care if some great people slip through the cracks; as long as what you do gets you good results, you will be OK. Fortunately there are different companies, and as a candidate you should find something that suits you. If you struggle, you need to play the game: figure out what works and gain those skills. Even if you have to fake it for a few hours.

This is a good point - some people (like the author of the article) seem to believe that the biggest problem companies face when hiring is losing out on great developers because they don't fit the mold. The reality is that it is far worse to hire lousy developers than it is to just pass and find the next candidate.
 

[)roi(]

Executive Member
Joined
Apr 15, 2005
Messages
6,282
Perhaps it's because she claims to have completed a course on Advanced Java Development? If it's on the resume, it's fair game.
By your measure I should expect to WB problems in a whole group of languages I used to code in but haven't used for a decade or more. To be honest, with that ridiculous notion even I would be flustered; I can certainly read the code, but trying to WB a new routine in a language I don't regularly code in would not only classify to me as hazing, it would strongly encourage me to end the interview. Conclusion -- terminated interview, because they asked me to WB solutions in COBOL, RPG, ML, PL/I, ... for an iOS job.
Brain teasers are toy questions, while working on a white board with someone communicating concepts in order to try to solve a technical problem is actually a pretty standard day in the office. I'm not sure why you think they're equivalent.
  • Questions were naturally repeated, meaning they were at some point published and their answers well debated -- i.e. just like swotting for theory, candidates were able to swot for "brain teasers".
  • Solving a brain teaser, like testing theory, had no direct correlation to experience or ability.
Yes, I can make a distinction because we ask many algorithmic problems, and none of them reduce to just implementing some well known algorithm.
Yet you argue the point for the examples I gave, where this clearly was not the case. For all your attempts at making this sound viable, you are really describing something in between pure theoretical WB and "brain teasers", neither of which has any basis in fact (that I know of) for its accuracy in interviews. You are, as I said, welcome at any stage to share material to the contrary.
 