
References (as recoverable from the page):

  • Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. IOP Publishing.
  • Rage against the machine? Google's self-driving cars versus human drivers. Journal of Safety Research, 63.
  • Integrating shared autonomous vehicle in public transportation system: A supply-side simulation of the first-mile service in Singapore.
  • Franken-algorithms: the deadly consequences of unpredictable code. The Guardian.

A very concise and clear presentation of the arguments. They have been well organised into their respective categories, and each argument poses a very logical deduction as to how the conclusion was attained.

I do agree with all the points on the benefits and dangers of automated algorithms replacing humanity. If I were to suggest an area for improvement, it would be to emphasise more clearly the point of the title: the reframing of the knowledge of driving. Reframing implies that there was a previous framing of that knowledge, so I think it would be good to state what it was. Some ideas off the top of my head:

What makes some knowledge more valuable than other knowledge? Who decides this, and what are the criteria for 'valuable'?



That is something you could choose to explore. Another thing you could explore is the legal framework: who is to blame for an accident? Bias is a huge consideration when designing any technology; the technology itself is neutral, but it is always the user, the creator, or both who contaminate its purity. This can also be linked to knowledge: who has the epistemic authority to decide what is right or wrong in an algorithm?


The user or the producer? Lastly, if an AV helps us make decisions, yet we suffer the consequences, does that mean our autonomy is undermined? This can be linked to Lynch, Chapter 5.

Hi Zach! Thank you so much for your insightful comment! I definitely agree that the previous framing of driving knowledge is essential, so I have added how society has valued driving in the past, which framed the current state of driving knowledge.

Next, I'm also glad that you brought up the need to define what I meant by "value", as it would not be clear to a reader with little knowledge of this topic. I also found your point about technology being neutral interesting, so I included a short analysis of how the potential conflict of interest between an AV's creator and its user may result in an unsolvable question of who gets to decide what the AV should know. Lastly, I have edited the last part on social impacts to include the concept of autonomy, as I agree that hesitation to accept the technology arises from uncertainty about the consequences of giving control to a non-human.

All in all, I appreciate that you have provided a different perspective on this topic, and I have learnt a lot from your comment.

A well-organised piece, with arguments on both the pros and cons of AVs. There were very interesting discussions on the reliability, efficiency and accountability of AVs, which are all very good criteria for comparison with human drivers.

There was one especially refreshing point where you brought up how human errors are unique to individuals, whereas machines are more vulnerable because any error (say, an error in the algorithm) may potentially produce larger impacts. I think this is really interesting; in my own paper I also talked about how the consistency of the execution of algorithms may lead to more consistent, and hence worse, errors than humans make. In the section "Potential Social Impacts to Consider", you mentioned that people may feel uncomfortable about how human knowledge is being "reduced" to algorithms.

This is an interesting statement in my opinion, but it occurred to me that the described situation has been happening ever since the start of programming, not just with AVs but with many other everyday things. It seems that we humans have drastically varying levels of acceptance depending on what we are talking about. For example, we seem to be much more comfortable with, and in fact thankful for, the invention of the calculator (which internally uses algorithms for computation), but not so much with AVs. Indeed, we might not like the idea of being replaced by machines, but why are we not equally concerned that we are losing our mathematical abilities by using calculators?
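As an aside, the contrast between scattered individual lapses and a single fault shared by every copy of an algorithm can be made concrete with a toy simulation (purely illustrative; every number and name below is invented, not drawn from any real AV data):

```python
import random

def human_failures(n_drivers, scenarios, p_lapse, rng):
    # Each human driver lapses independently, with probability p_lapse,
    # on each scenario; errors are uncorrelated across drivers.
    return sum(
        1
        for _ in range(n_drivers)
        for _ in scenarios
        if rng.random() < p_lapse
    )

def av_failures(n_cars, scenarios, buggy_scenarios):
    # Every AV runs the same algorithm: any scenario that triggers the
    # shared bug fails in every single car (perfectly correlated errors).
    return sum(1 for _ in range(n_cars) for s in scenarios if s in buggy_scenarios)

rng = random.Random(42)
scenarios = range(100)                                    # 100 hypothetical driving scenarios
humans = human_failures(1000, scenarios, p_lapse=0.01, rng=rng)
avs = av_failures(1000, scenarios, buggy_scenarios={17})  # one shared bug

print(humans, avs)
```

With these made-up numbers the two totals come out similar (around a thousand each), but the human lapses are spread thinly across drivers and scenarios, while every AV failure is concentrated in one scenario and strikes the whole fleet at once, which is the sense in which algorithmic errors are "more consistent and hence worse".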

Given that AVs are still a relatively new technology (unlike calculators, which have been tried and tested, demonstrating consistently better performance than humans), we hesitate even more to trust the algorithms.

Hi Lizhi!


You have made good observations of the post; those were some of my favourite points too! I definitely agree that fear of the effects of technology plays a big part in its acceptance. The term "reduced" has a skeptical connotation, meant to convey the uncertainties in the outcomes of AVs, especially since driving knowledge is perceived to be very "human", given that intuitive skills are often utilised.

It is therefore logical to be fearful of the technology, especially when users' lives are at stake. I have included a point in the social-impacts section in response to your opinions on the term "reduced".

Technology and the Fate of Knowledge

Furthermore, given that the knowledge of an AV comes from its programmer, it falls victim to the personal biases of its creator. However, AV users, as consumers, would rather customise their AVs according to their own interests.

Consequently, a grey area exists as to who should be the epistemic authority in deciding which ethical framework an AV should adopt. The thought of uniquely human knowledge being modelled in a machine may cause some to feel uncomfortable, as knowledge that was once perceived to be complex and abstract is reduced to a mere set of algorithms. This discomfort may stem from fear of the effects of a relatively new technology, especially since we are putting what we deem important (our lives, beliefs and even our own knowledge) at stake.

Imagine a hypothetical situation in which AVs can easily replace human drivers: driving knowledge would then be made redundant, as the skill could be outsourced to machines at lower cost. In addition, if we let technology make decisions in our place yet still suffer the consequences, there seems to be a sense in which human autonomy is undermined [8]. Ultimately, in the process of designing algorithms that aim to express knowledge through technology, humans are essentially learning about themselves.
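The grey area over epistemic authority can be caricatured in code. This is a deliberately naive sketch (all policy names, fields and values are invented for illustration): a manufacturer ships a default ethical policy, an owner prefers another, and nothing in the system itself can say whose preference is authoritative; the tie-break is just a switch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    # Hypothetical knob: weight placed on protecting occupants
    # versus everyone else (0.0 .. 1.0).
    occupant_priority: float

# The creator's default and the user's preference disagree.
MANUFACTURER_DEFAULT = Policy("minimise-total-harm", occupant_priority=0.5)
OWNER_PREFERENCE = Policy("protect-occupants-first", occupant_priority=0.9)

def effective_policy(defer_to="manufacturer"):
    # The code can *select* an authority, but it cannot *justify* one:
    # the choice of defer_to is exactly the grey area in question.
    return MANUFACTURER_DEFAULT if defer_to == "manufacturer" else OWNER_PREFERENCE

print(effective_policy("manufacturer").name)  # minimise-total-harm
print(effective_policy("owner").name)         # protect-occupants-first
```

The point of the sketch is that the disagreement is resolved by an arbitrary configuration flag rather than by any principle inside the system, which is one way of seeing why "who gets to decide what the AV should know" has no purely technical answer.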
