Question

Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Can transparency contribute to restoring accountability for such systems? Arguments for and against include issues such as the loss of privacy when data sets become public, the perverse effects of disclosure of the very algorithms themselves (which can lead to ‘gaming the system’), the potential loss of competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms are inherently non-transparent. It is concluded that transparency is certainly useful, but only up to a point: extending it to the public at large is normally not to be advised. Do you agree?

1. Present an introduction (or background) to your topic and your essay (5 marks).

2. Compare and contrast professional ethics with other types of ethics (10 marks).

3. Include examples of professional Codes of Ethics to support your answer (5 marks).

4. Present a conclusion that briefly outlines your point of view (5 marks).

5. Ensure that your essay is well written and structured (5 marks).

Answer #1

- Accountability has become a pressing concern as decision-making is increasingly supported by algorithms, and machine learning is the mechanism by which algorithms for a broad array of practices are now developed.


- Total transparency seems to be the ideal for restoring accountability to algorithmic systems. After all, whenever parties or institutions are called to account, the “raw data” of the whole process must be accessible for analysis.

- Machine learning is involved throughout the stages of data collection, model construction, and model use.

- In the literature, four main arguments against full transparency can be identified:

(1) Privacy:
Sensitive data could leak into the public domain.

(2) Perverse Effects:
Transparency invites those concerned to game the system, rendering it effectively worthless.

(3) Competition:
Disclosure could damage an enterprise's competitive edge.

(4) Inherent Opacity:
Published algorithms are often difficult to understand, so little insight is gained from disclosure.

- Here, the first three arguments alert us to undesirable harms resulting from disclosure, while the fourth warns that the gains from transparency may be limited anyway.

- In particular, at present accountability is not substantially promoted by making the public at large the beneficiary of generous disclosure; only oversight bodies should enjoy full transparency.

- As far as public interests (broadly defined) are involved, society no longer accepts that algorithms remain completely confidential, and thus unaccountable.

- Balancing this demand with the legitimate interests of companies, an adequate compromise is to grant full transparency to intermediate parties only.

- Oversight boards, bound to secrecy, may supervise and regulate the numerous algorithms the public is subjected to and dependent on.

- After this first step, private companies may at a later stage be required to accept the same rule of transparency and open up their algorithms to oversight boards as well.

- Consider how accountability fares at each stage of the machine learning process under full transparency:

Step 1: Data Collection

- Datasets must be gathered to serve as input to the machine learning process.

- The basic requirement is that the data are relevant to the questions being asked.

Step 2: Model Construction

- The available data are used as training material for machine learning.

- Various approaches may be used, e.g., decision trees, support vector machines (SVMs), ensemble methods, neural networks, etc.

- A model is constructed that best fits the data; it evolves step by step, its error steadily diminishing.

- The resulting models are used for purposes of prediction.
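The "error ever diminishing" idea of Step 2 can be sketched in a few lines. The following is only an illustrative toy, not any specific system from the essay: it fits a one-parameter linear model y = w·x by gradient descent, where the data points and learning rate are invented for the example.

```python
# Toy sketch of Step 2 (model construction): fit y = w * x by gradient
# descent on mean squared error. Data and hyperparameters are invented
# for illustration; real systems use far richer models (trees, SVMs, ...).

def fit(xs, ys, lr=0.01, steps=100):
    """Return the fitted weight and the per-step training errors."""
    w = 0.0
    errors = []
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
        mse = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        errors.append(mse)
    return w, errors

if __name__ == "__main__":
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 7.8]        # roughly y = 2x, with noise
    w, errors = fit(xs, ys)
    print(w)                          # weight close to 2
    print(errors[0] > errors[-1])     # training error has diminished
```

The point for accountability is that even this trivial training loop produces an opaque artifact (a fitted number) whose justification lies in the data and the procedure, not in the final model alone.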

Step 3: Model Use

- Upon integration, the model is ready to be used for making decisions.

- Decisions can now be made with the support of the developed algorithm; by no means is it suggested that these must be accepted in fully automated fashion.

- As a rule, there is a spectrum of possibilities, from mainly human to fully automated decision-making. Depending on the specific context at hand, one or another solution may be optimal.

- Proper accounting should provide a reasoned report on, and explanation of, the preferred level of automation in decision-making.
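The spectrum from mainly human to fully automated decision-making described in Step 3 can be made concrete with a confidence-threshold rule. This is a hypothetical sketch, not a design from the essay: the scores, threshold, and decision labels are all invented for illustration.

```python
# Illustrative sketch of Step 3 (model use): automate only confident
# predictions and refer borderline cases to a human reviewer. The
# threshold of 0.9 is an arbitrary assumption for the example.

def decide(score, auto_threshold=0.9):
    """Map a model's score in [0, 1] to a (possibly human) decision."""
    if score >= auto_threshold:
        return "approve (automated)"
    if score <= 1 - auto_threshold:
        return "reject (automated)"
    return "refer to human reviewer"

if __name__ == "__main__":
    for s in (0.95, 0.50, 0.05):
        print(s, "->", decide(s))
```

Lowering the threshold moves the system toward full automation; raising it keeps more cases in human hands. A reasoned account of where that threshold sits, and why, is exactly the kind of explanation the bullet above calls for.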

- First, for the sake of privacy it would be inappropriate to make the underlying datasets freely accessible to everyone; doing so would amount to an invitation to violate privacy.

- Secondly, full transparency about the machine learning models in use may lead those concerned to game the system and thereby undermine its efficiency.

- As long as the indicators in use are not robust against manipulation, the only remedy remains to exclude the public at large from full transparency.

- The same conclusion applies to models that may lead to stigmatization; restricting their disclosure to oversight bodies alone seems indicated.

- Thirdly, as a rule, companies emphatically assert their property rights over their algorithms.

- As long as this attitude persists, the only practical alternative to complete opacity is transparency limited to intermediate parties involved in oversight (starting with algorithmic decision-making for public purposes).

- The three counter-arguments thus work in unison towards similar restrictions. Making the public at large the beneficiary of full transparency would, so to speak, perversely affect accountability.

- We should normally content ourselves with oversight bodies enjoying full transparency; they can carry out the task of calling algorithmic systems to account in our stead.

- In addition, affected individuals should be able to obtain a complete explanation of decisions that concern them.

- Oversight bodies may serve to ensure that such accounts are easily obtained.

- Once algorithms in use are constructed to be robust against manipulation and comprehensible by design, room will open up for extending transparency further.

- Then it would make sense to open up algorithms (and their results) for perusal by the public at large.
