Posts by Tze Ming Mok
-
Speaker: ‘Kiwimeter’ is a methodological…, in reply to
Final nerd point: though, as linger points out, a pilot survey does not need to be a representative sample, in this case the '6 archetypes' that were based on that pilot/precursor survey were each accompanied within the 'Kiwimeter' by a percentage of the country they were meant to represent - with no disclaimer that there was no real representative sampling behind those percentages. These misleading claims are still being touted around. See Barry Soper in the Herald proudly proclaiming that his 'patriot' group is 36% of the country. Even worse, you get TVNZ reporting things like 'sport is the most important thing for New Zealand identity' based on the self-selecting 'Kiwimeter' survey. None of this is accurate. The best you can call it is a 'good guess' based on demographic weighting. But of course, given that they fucked up the execution, you can't even really call data coming from the 'Kiwimeter' itself 'good' - more like a 'fatally compromised guess'.
-
Speaker: ‘Kiwimeter’ is a methodological…, in reply to
You would think that would have been their first line of defence if the numbers looked right. However, I suspect (just a hunch, not based on my experience of online self-selective surveying in NZ) they would need to upweight Maori participants in the first place for any online survey.
-
Speaker: ‘Kiwimeter’ is a methodological…, in reply to
The points made by Peter Davis are essentially what I covered in my first post on this matter: that we need to study the prevalence of racist attitudes by, inevitably, testing racist statements in surveys. I am not suggesting here that a research ethics committee needs to intervene in or block research like this, but that in this case the researchers did not think as hard as they should normally be expected to about the likely impact of the reduced schedule of Kiwimeter questions. This is likely because they relied on their 'usual' approach despite the delivery of the survey being far removed from the context of, say, the way the NZAVS is presented to respondents. These are duties researchers have to themselves and their own standards, not necessarily something that needs policing by an external REC.
-
Okay wow, the Director of Vox replied, and confirmed that he does not accept the validity of any feedback so far from Maori about their decreased likelihood of filling in the survey on ideological grounds, because the feedback is "anecdotal" and "sampling on the dependent variable". He does not seem to understand that *any* qualitative evidence of groups selecting out of the survey on these grounds is a problem for the survey, *precisely because* it cannot be quantified. This is like a stereotype of a quant guy who does not understand the purpose of qualitative research in the context of survey design, i.e. that often a survey will be useless without it. He also does not seem to want to acknowledge the effect that the media coverage is likely to have on people selecting out of the survey. And he tries to cover himself by saying that cognitive testing is not common in "academic research" - I guess this is why he hadn't heard of it. Oh dear. I'm getting the feeling that this guy is a freakin' amateur.
-
Speaker: ‘Kiwimeter’ is a methodological…, in reply to
EXACTLY, YO.
-
Speaker: The real problem with the ‘Kiwimeter’, in reply to
The survey I did (promoted on Waitangi Day!) was run by http://www.voxpoplabs.com/ who seem to specialise in these exercises. It was about 15 minutes to do, very long, and the questions were much the same as the present Kiwimeter has, including the "special treatment" questions.
I complained in the feedback form at the end of the survey, and received an answer or two, which I wrote about here. There was no acknowledgement that they might have screwed up. They think their questions are neutral.
If this was the total of their piloting, that's a sad-ass yet totally typical level of corner-cutting. Credible social research organisations routinely do cognitive piloting for large-scale surveys: you recruit a broad range of people, sit down with them, and essentially do qualitative interviewing about what is going through their heads when they fill out the survey. That's how you find out whether your survey *really* works, or whether it's going to look shady and kinda racist. It's not even expensive to do. Ugh, the incompetence.
-
This is a very poorly thought-out question. It may well bring out the 'shitty attitudes' - but how would you have any idea what people meant when they agreed or disagreed?
I agree the 'scale' is wrong (and while it's not completely 'useless' as a way to gather opinions, it does actually make people think the question is insulting, predetermined and racist), but the exact statement is the right one to be testing opinions on. As I mentioned in comments, a better scale would have been 'how acceptable or unacceptable do you find this statement?' or maybe something like 'how likely is it that I would say something like this?' or 'how alike am I to a person who says this?'
The whole thing is poorly thought out. The more I think about many aspects of it, the worse it gets. They're really boasting now about 130,000 respondents? Congratulations, that's 130,000 respondents' worth of worthless data that doesn't mean anything!
-
Speaker: The real problem with the ‘Kiwimeter’, in reply to
I was part of the initial survey, I think. Done through Horizonpoll, certainly not called Kiwimeter at the time.
There's so little information about the original survey - who conducted it, what the questions were, how long it was, how it was carried out... I'd be interested in how you thought the two compared.
-
Speaker: The real problem with the ‘Kiwimeter’, in reply to
TVNZ fails to acknowledge the important distinction between a self-selecting poll and a properly sampled survey. The latter has much greater validity than the former.
Exactly. If you start with the equivalent of a 'Herald online poll', you can't weight your way to a representative sample. You can only make it slightly less crap than an unweighted sample.
Vox Labs tweeted me to say that they are weighting the online Kiwimeter results. Cool story bro. Not going to help the massive self-selection problem of people selecting out of a survey about attitudes to national identity, based on their pre-existing attitudes to national identity...
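To make the arithmetic concrete, here's a toy sketch in Python - all numbers and names invented, nothing to do with Vox Labs' actual weighting procedure - of why post-stratification by demographics can't fix attitude-based self-selection:

# Toy illustration (invented numbers): demographic weighting cannot undo
# attitude-based self-selection in a self-selecting online survey.

# Pretend the true population is 15% Maori / 85% non-Maori, and that
# 60% of Maori and 40% of non-Maori actually hold attitude A.
true_share = {"Maori": 0.15, "non-Maori": 0.85}
true_attitude_a = {"Maori": 0.60, "non-Maori": 0.40}

# In the self-selecting sample, Maori are under-represented (5% of respondents)
# and, within every group, people holding attitude A are half as likely to take part.
sample_share = {"Maori": 0.05, "non-Maori": 0.95}
take_part = {"A": 0.5, "not_A": 1.0}

def observed_attitude_a(group):
    """Share of a group's respondents reporting attitude A, after self-selection."""
    p = true_attitude_a[group]
    a, not_a = p * take_part["A"], (1 - p) * take_part["not_A"]
    return a / (a + not_a)

# Demographic (post-stratification) weights: scale each group back to its true share.
weights = {g: true_share[g] / sample_share[g] for g in true_share}

unweighted = sum(sample_share[g] * observed_attitude_a(g) for g in true_share)
weighted = (sum(sample_share[g] * weights[g] * observed_attitude_a(g) for g in true_share)
            / sum(sample_share[g] * weights[g] for g in true_share))
truth = sum(true_share[g] * true_attitude_a[g] for g in true_share)

print(f"True share holding attitude A: {truth:.1%}")       # 43.0%
print(f"Unweighted online estimate:    {unweighted:.1%}")  # ~25.9%
print(f"Demographically weighted:      {weighted:.1%}")    # ~27.7%
# The weights are perfect, yet the weighted estimate is still miles off,
# because weighting can't see who chose not to answer within each group:
# slightly less crap than the unweighted number, but nowhere near representative.

Swap in whatever numbers you like; as long as the self-selection is driven by the very attitudes you're trying to measure, the demographic weighting only shuffles the deck chairs.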
(a bit hot on this at the moment as I am currently teaching a Media Research course).
PREACH, BROTHER.
-
Speaker: The real problem with the ‘Kiwimeter’, in reply to
Actually, the introduction I saw said the survey is over, and that whatever they have on their website now simply invites us to profile ourselves against the results of that survey. Kinda like the political compass thing a few years ago.
That's what I thought as well, except TVNZ is actually now reporting the online survey results as the results of the 'Kiwimeter', touting sample sizes of 80-90,000. Clearly not the original survey of 10,000. So that's not what it seemed. The Privacy tab of the online quiz stated that TVNZ would be allowed to report your data in news coverage, although it didn't say that your actual answers comprised the 'Kiwimeter'.