(no subject)
Jun. 1st, 2004, 11:47 am

if i could indulge upon your patience... i would now like to bitch about my job.
a few months ago i wrote a program that collects results from three databases and writes them out to ~400 excel files. there are many factors involved; the bit of data i'm bitching about in this entry belongs to the "quality team." they listen in on customer calls and rate them, and each call is scored from 1 to 100. the quality team wanted me to pull all the scores, loop through each one, and test it against certain standards... here are those standards:
(on a 1 to 100 scale)
>90 = WOW-"Gold Standard" (if i score this do i get a bottle of tequila?)
81-90 = excellent
61-80 = good
51-60 = just OK
<51 = poor
(*These stats have not changed... even with everything below, the standards still remain the same to this day.)
i was supposed to pay attention to "WOW", "excellent", and "poor":
of all the shops, 50% or more had to be either WOW or excellent, and none could be a poor.
simple enough... i wrote this into the program
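(for anyone who cares what that looks like: here's a rough python sketch of the logic as i understood it. it is not the real program, and the names are made up, but the thresholds and the 50%-or-no-poors rule are the ones listed above.)

    def rate_call(score):
        # bucket a single 1-100 quality score per the standards above
        if score > 90:
            return "WOW"          # "gold standard"
        elif score >= 81:
            return "excellent"
        elif score >= 61:
            return "good"
        elif score >= 51:
            return "just OK"
        else:
            return "poor"

    def team_passes(scores):
        # a team passes if 50% or more of its monitored calls are WOW or
        # excellent, and none of them are poor
        ratings = [rate_call(s) for s in scores]
        if "poor" in ratings:
            return False
        top = sum(1 for r in ratings if r in ("WOW", "excellent"))
        return top / len(ratings) >= 0.5

    # made-up example: 3 of 5 calls at excellent or better, no poors, so it passes
    print(team_passes([95, 88, 72, 83, 64]))   # True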
the first time i ran this, i received numerous accusations from the various supervisors that my program was flawed and that it was being "too easy" on people. a score of 81 was way too low a bar; excellent isn't good enough, apparently.
i went to quality and oopsie! apparently i misunderstood where the line in the sand was to be drawn: it wasn't at a score of 80, it was at 90. the resulting percentage is what they call the "WOW %", so i'm only supposed to be counting the WOWs.
ok, not a problem, i must have misunderstood, and i made the change. i went to talk over these stats in person instead of sending an email (mistake). anyhow, that was two months ago.
today a new supervisor complained that the database i draw the stats from makes its own reports, and that those reports list a drastically higher "WOW %" for his team (my report: 27%, his: 78%). my program, he concludes, is therefore farked up.
i looked at their report, looked at mine, ran the numbers, and whaddya know? same number of monitors, same scores; only the percentages differ. i do the math, and my report draws the WOW line at a score of 90, theirs.... at 85.
WTF? 85 isn't a dividing line anywhere in their standards... and when you print the report from the problem database, it lists the very standards i've referenced in a little table up top. so the stupid report draws the line at 85 while claiming it draws the line at 90, which makes my program look busted when it's the one actually reporting on 90. no one questions their report because it looks okay to them. looks fine from where i live!
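(just to show how much that one number matters, here's the same kind of calculation under both cutoffs, with made-up scores: a cluster of calls in the 86-90 range flips from "not WOW" to "WOW" depending on where the line sits, which is exactly the sort of gap we're arguing about.)

    def wow_percent(scores, cutoff):
        # percentage of monitored calls scoring above the cutoff
        wows = sum(1 for s in scores if s > cutoff)
        return 100.0 * wows / len(scores)

    # hypothetical batch of monitor scores for one team
    scores = [95, 92, 88, 87, 86, 86, 83, 79, 72]

    print(wow_percent(scores, 90))   # ~22%: only the 95 and 92 count
    print(wow_percent(scores, 85))   # ~67%: the 86-88 cluster counts too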
so back i go to quality, figuring somebody made a mistake with their report.... but oopsie! it turns out i misunderstood again! silly me. although the magic line of 85 isn't in the standards anywhere, that is indeed where the division is supposed to be. didn't we tell you that?
WTF WTF WTF!
so now i'm supposed to change the dumbass code to reflect this BS standard that isn't supported by any documentation anywhere, leave it at that, and somehow not expect it to come back and be interpreted as something different two months from now.
normally when i change the code, i comment on why it changed, when it changed, and who approved it, and i save the email asking me to make the change. but these standards date back to when the program was first written, so those comments aren't there; the decisions were made in face-to-face meetings and handed to me on paper. since then we've had a building move, and all paperwork relating to non-essential issues was destroyed.

and now the head of quality is pretending she never told me to change the cutoff from 80 to 90 at all, so apparently i made that up in an argument with some schizophrenic version of myself pretending to be the head of quality. and since the line was NEVER at 80 but ALWAYS at the imaginary 85... which i magically should have known, in fact "i'm sure i told you"... i MUST have been schizophrenic when i programmed the standards into the code to begin with.
i feel like telling them that clearly SOMEBODY in this whole chain's sanity is in doubt, and until we figure out whose, we shouldn't go altering the code any more. the worst part is that no matter how this all comes out, i look like the irresponsible one, because i'm going to be putting out revisions to reports filed two and three months ago so that the end-of-year reports honestly reflect the situation, and that just sucks.