
GIGO Principle

The GIGO principle states that the quality of a computer system's output is determined by the quality of its input data. In essence, if you feed in garbage data, you will get garbage results. This principle highlights the critical importance of accurate and reliable input for meaningful computational outcomes.
•  GIGO stands for "Garbage In, Garbage Out."
•  If you provide incorrect or biased input to a computer, the output will be inaccurate.
•  The principle applies to AI systems as well: the quality of the output depends on the quality of the input data, as the sketch after this list shows.
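
To make this concrete, here is a tiny, purely illustrative Python sketch (the sensor readings and the -999 error code are invented for the example): the same averaging routine gives a sensible answer on clean input and a nonsense answer on garbage input.

    # GIGO in miniature: one computation, two input qualities.

    def average(readings):
        """Mean of a list of temperature readings (deg C)."""
        return sum(readings) / len(readings)

    clean_data = [21.0, 22.5, 21.8, 22.1]        # accurate sensor readings
    garbage_data = [21.0, -999.0, 21.8, -999.0]  # -999: hypothetical sensor error code

    print(average(clean_data))    # 21.85   -> good input, good output
    print(average(garbage_data))  # -488.8  -> garbage input, garbage output

The computation is identical in both runs; only the input quality changed. That is the entire content of the GIGO warning.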

Significance of GIGO in AI

•  Highlights the importance of using unbiased, high-quality data during the development stage of AI systems.
•  Emphasizes that biased input can lead to discriminatory outcomes in AI decision-making, as the sketch after this list illustrates.
•  Helps us understand the impact of biased data on AI systems through real-world examples like Amazon's hiring AI.
•  Stresses the GIGO principle in the context of AI development to ensure fair and unbiased outcomes.
•  Encourages awareness and ethical considerations in AI development to mitigate bias and promote inclusivity.
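
Amazon's experimental resume-screening tool reportedly learned to penalize resumes associated with women because it was trained on past hiring data dominated by men. The sketch below is a deliberately simplified, synthetic imitation of that failure mode, not a reconstruction of the real system: a frequency-based "screener" trained on biased historical decisions simply automates the bias.

    # A toy resume screener trained on biased historical decisions.
    # All records are synthetic and exaggerated to make the effect visible.
    from collections import Counter

    # (keyword on resume, was the candidate hired?)
    history = [
        ("chess_club", True),   ("chess_club", True),
        ("chess_club", True),   ("chess_club", False),
        ("womens_club", False), ("womens_club", False),
        ("womens_club", False), ("womens_club", True),
    ]

    # "Training": estimate the historical hire rate for each keyword.
    hired = Counter(keyword for keyword, ok in history if ok)
    seen = Counter(keyword for keyword, _ in history)
    hire_rate = {k: hired[k] / seen[k] for k in seen}

    def screen(keyword):
        """Advance candidates whose keyword historically led to hires."""
        return "advance" if hire_rate.get(keyword, 0) >= 0.5 else "reject"

    print(hire_rate)              # {'chess_club': 0.75, 'womens_club': 0.25}
    print(screen("chess_club"))   # advance
    print(screen("womens_club"))  # reject -- the past bias is now automated

Nothing in this code is "prejudiced"; it faithfully learns whatever pattern its input contains, which is exactly what GIGO predicts.
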
PROBLEM OF INCLUSION IN AI SYSTEMS

Data is at the core of AI systems that make decisions without human interference. Imagine AI systems as smart robots that learn from large amounts of information (data) to do their jobs. But sometimes the data they learn from is not fair: it can be one-sided or biased.

For example, think about a robot that recognizes faces. If it is trained mostly on light-skinned faces, it might not be great at recognizing darker-skinned faces. This can make people with darker skin feel that the robot is not treating them fairly.
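
The sketch below reproduces this disparity with synthetic numbers instead of real faces (NumPy and scikit-learn are assumed to be available; the group sizes and feature patterns are invented, and group B's pattern is deliberately exaggerated). A model trained on data dominated by one group fits that group's pattern and performs far worse on the under-represented group.

    # Imbalanced training data -> unequal accuracy across groups (synthetic).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, flip):
        """Binary-labelled 1-D data; `flip` reverses the feature pattern."""
        y = rng.integers(0, 2, n)
        centers = np.where(y == 1, 1.0, -1.0) * (-1.0 if flip else 1.0)
        X = centers[:, None] + rng.normal(0.0, 0.5, (n, 1))
        return X, y

    # Group A dominates training (950 examples); group B has only 50.
    Xa, ya = make_group(950, flip=False)
    Xb, yb = make_group(50, flip=True)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

    # Test on fresh samples from each group.
    Xa_test, ya_test = make_group(500, flip=False)
    Xb_test, yb_test = make_group(500, flip=True)
    print("accuracy on group A:", model.score(Xa_test, ya_test))  # high (~0.97)
    print("accuracy on group B:", model.score(Xb_test, yb_test))  # very low

Reversing the pattern for group B is an exaggeration, but the mechanism is the one behind real face-recognition disparities: the model optimizes for whoever dominates its training data.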

So the problem is that these smart robots can end up making unfair decisions because they learned from data that was unfair to begin with. This is something researchers are actively trying to fix, so that everyone is treated equally by these systems. One simple first step, sketched below, is to check the training data for balance before any model is trained.
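
The following is a minimal, hypothetical pre-training audit in Python, plus a naive oversampling fix (the image labels and counts are invented for illustration). Collecting genuinely diverse data is the better long-term remedy; duplicating samples only copies existing information.

    # A minimal pre-training audit plus a naive rebalancing fix (synthetic).
    import random
    from collections import Counter

    random.seed(0)

    # Hypothetical labels describing who appears in each training image.
    images = ["light_skin"] * 950 + ["dark_skin"] * 50

    print(Counter(images))  # Counter({'light_skin': 950, 'dark_skin': 50})
    # The 95% / 5% split is flagged BEFORE any model is trained.

    # Naive fix: oversample the under-represented group to equalize counts.
    minority = [img for img in images if img == "dark_skin"]
    balanced = images + random.choices(minority, k=950 - 50)
    print(Counter(balanced))  # Counter({'light_skin': 950, 'dark_skin': 950})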

