Speaker: Inequality: Too big to ignore
-
BenWilson, in reply to
my question is, if so, what would alternative tools look like if they were instead developed with a virtue ethics or a deontological bias?
Do you have any suggestions? My feeling is that it would be extremely difficult, as deontological systems are inherently rule based. They don't even admit of any objective function, so when making decisions amongst multiple permissible actions they don't have much to say about what would be the best one. In other words, they're OK for making laws, but when it comes to economic management, it's hard to see that they provide any kind of framework.
I guess you could try to define tools that analyze the extent to which changes to economic settings increase or decrease the likely violations of rules. But that's mostly geared towards making a lawful society, rather than a happy or wealthy one.
If you provide an objective function to maximize, so as to get a deontological system to address this, I think that really what you've done is change it to a consequentialist system. Consequentialists can, after all, be rule-based too. Rule Utilitarianism, for instance.
Hence my challenge to show how deontology is not subsumed into consequentialism, if you want to make it actually make practical economic decisions.
You could, for instance, define an objective in a virtue ethics framework, of minimizing deviation from a golden mean in each identified virtue. You'd probably want to define how that applied across more than one person, and how you add up deviations to get a single number of total deviation at the end. You'd have to define your loss function. Then you could look at how economic settings affect that total, and aim for the one that minimizes it. Then you'd have an economic system aimed at maximizing social virtue. But I struggle to see how that's really much different to a Rule Utilitarian framework, in which the various contributors to overall "goodness" are defined by a bunch of rules and their level of violation, coupled with the general objective of maximizing happiness. They just look like different implementations of the same thing, and they both look consequentialist.
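As a rough sketch of what that objective might look like, with every virtue, score and the squared loss invented purely for illustration (the choice of loss function is itself a value judgement you'd have to defend):

    # Hypothetical sketch: total deviation from a "golden mean" across people and virtues.
    # The virtues, targets and scores are all made up; so is the squared loss.
    golden_mean = {"courage": 0.5, "generosity": 0.6, "temperance": 0.5}

    def total_deviation(population, loss=lambda d: d * d):
        """Sum each person's loss-weighted deviation from the mean, over all virtues."""
        return sum(
            loss(person[virtue] - target)
            for person in population
            for virtue, target in golden_mean.items()
        )

    # Two hypothetical "economic settings" producing different virtue profiles.
    setting_a = [{"courage": 0.4, "generosity": 0.7, "temperance": 0.5},
                 {"courage": 0.9, "generosity": 0.2, "temperance": 0.6}]
    setting_b = [{"courage": 0.5, "generosity": 0.6, "temperance": 0.4},
                 {"courage": 0.6, "generosity": 0.5, "temperance": 0.5}]

    print(total_deviation(setting_a), total_deviation(setting_b))  # prefer the smaller total

Once you've picked the loss function and the way you add people up, it's hard to see what makes that anything other than consequentialism with a particular scoring rule.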
-
BenWilson, in reply to
That risks confusing a moral science for a natural one.
He seems to be saying the existing ‘toolkit’ isn’t necessarily up to the job and relying solely on these commonly used tools, “risks confusing a moral science with a natural one”.
I think he's making quite a profound point here. He's saying that it's easy to confuse the ability to optimize to some objective with the goodness of doing so. The first is a purely technical thing and often very, very hard, the work of lifetimes. The second is a moral choice. It is disputable whether there even are experts on moral choice. It's something that sits at the meta-level for economics, something that they don't actually get to decide for us. It is not down to economists to decide whether a poor but equal society is worse than a richer but unequal one. That's something for society itself to decide. It's not something an optimization tool can decide for us.
This is true even in more pedestrian optimization scenarios, where the choices aren't particularly moral. I can design (and have designed) systems that optimize to constraints. But I don't actually get to decide either the constraints OR the objective function itself. If the system produces an answer, the most I can really say is "This is close to the best IF you have correctly identified the constraints and objective". I can also give extremely useful information in the post-optimal analysis (I came to think of this as actually the most useful thing any optimization tool could do) about the extent to which the constraints push upon the objective, so that decisions about how to change the constraints can be made on an informed basis. But actually changing the constraints themselves, or deciding on the objective, was well outside my brief as a mere technician.
For instance, in a transport optimization scenario, I can suggest an objective that is entirely about minimizing cost in dollars. But the management might decide their objective actually involves distributing the work fairly amongst the drivers as a partial goal. So be it, that's not my call. All I can say is what that might cost, not whether it's a good or bad thing to do.
I say this having been in this exact situation. So I can fully appreciate what Makhlouf is saying about the risk of the confusion. Certainly I was in a very powerful position to make the call myself about what happened for the drivers, and subject to pressures from all sides. And my own interests were in some senses in direct conflict, since our system had been challenged to provide a certain level of saving, with financial incentives tied directly to that. Of course having the drivers happier came at some cost to that. If I were a particularly Machiavellian character, I'd have simply pushed management to tell the drivers to get stuffed and put up with the nasty change in their working conditions. Instead... well, I digress. I didn't do that, but I can fully see that there's a blurry boundary between designing/applying the tools one has to achieve an end, and deciding on the ends themselves. By the time you have done that kind of analysis, built that kind of tool, there really is no one more knowledgeable about what it can do, or how it could be better used, so you quite easily slip into making decisions that aren't really yours to make.
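To put toy numbers on the transport example, here's a made-up sketch (nothing like the real system, and the costs are invented): a tiny job-to-driver assignment solved by brute force, once purely on cost and once with a cap on how many jobs any one driver carries. The gap between the two optima is the kind of post-optimal number I could report; whether it's worth paying was never my call.

    from itertools import product

    # Invented cost matrix: cost[d][j] = dollars for driver d to do job j.
    cost = [
        [4, 3, 5, 4],   # driver 0 happens to be cheapest at everything
        [6, 7, 6, 8],
        [7, 6, 8, 7],
    ]
    n_drivers, n_jobs = len(cost), len(cost[0])

    def best_assignment(max_jobs_per_driver=None):
        """Brute-force the cheapest way to give every job to some driver,
        optionally capping how many jobs a single driver may take."""
        best_cost, best_plan = float("inf"), None
        for plan in product(range(n_drivers), repeat=n_jobs):   # plan[j] = driver for job j
            if max_jobs_per_driver is not None:
                if max(plan.count(d) for d in range(n_drivers)) > max_jobs_per_driver:
                    continue                                    # violates the fairness cap
            total = sum(cost[plan[j]][j] for j in range(n_jobs))
            if total < best_cost:
                best_cost, best_plan = total, plan
        return best_cost, best_plan

    cheapest, _ = best_assignment()                       # pure cost minimisation
    fair, _ = best_assignment(max_jobs_per_driver=2)      # with a fairness constraint
    print(f"cheapest: ${cheapest}, with fairness cap: ${fair}, price of fairness: ${fair - cheapest}")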
-
Katharine Moody, in reply to
Do you have any suggestions?
That would be putting the cart before the horse, given I/we don’t even know yet whether the basic assumption (i.e., that most economic tools are grounded in consequentialist ethics) has any merit.
I prefer to think of deontological ethics as being ‘duty’-based – and from duty flow rules. For example, say we as a society accept that we have a duty to future generations. There are then no ‘trade-offs’ – there is only adherence to that duty in respect of our social choices/economic decisions. The question is whether present-day modelling tools exist to make an analysis with this ethical bias.
Similarly, I don’t see an exercise in finding the golden mean, say in terms of dairy production, as involving trade-offs. If you take the marginal cow analysis, it’s a win/win: profit is maximised and excess production (at a lower return) is avoided. For me anyway, it looks like a ‘golden mean’ – the midpoint between excess and deficiency. I only use this example given the government’s objective to double primary industry exports by 2025. I assume they have done some kind of economic analysis to support such an objective – and that the trade-offs associated with such doubling have been identified (surely there must be some). Hence, would it not also be interesting if we could model such an objective using tools appropriate to a different ethical framework?
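By way of a toy illustration of the marginal-cow point, with all figures invented: if each extra cow returns slightly less than the one before while costing the same to run, profit peaks well before production does, so the profit-maximising herd and the ‘golden mean’ herd coincide.

    # Invented figures: diminishing marginal revenue per extra cow, flat cost per cow.
    def profit(herd_size, base_revenue=2000.0, revenue_decay=25.0, cost_per_cow=1200.0):
        """Total profit when each successive cow earns a little less than the previous one."""
        revenue = sum(base_revenue - revenue_decay * i for i in range(herd_size))
        return revenue - cost_per_cow * herd_size

    best_herd = max(range(1, 101), key=profit)
    print(best_herd, profit(best_herd), profit(100))  # the marginal cow stops paying long before cow 100

On these made-up numbers, pushing production past the peak only erodes the return – which is the excess side of the mean.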
-
Katharine Moody, in reply to
The second is a moral choice.
Yes, but if economics is a moral science, then (one assumes) the science of it must be grounded in a moral framework – and there are numerous moral frameworks.
Mary Midgley wrote an interesting book called "Can't We Make Moral Judgements?" It goes to the heart of the fact/value distinction that is so dominant in our current way of thinking. It's a worthwhile read.