We can move everything to one side, just like a seventh or eighth grader would, and set it equal to zero: (A − λI)e = 0. Now, what has to be zero here? A − λI has to be, in effect, zero — because e cannot be zero; if the eigenvector is zero, what is the point of doing this? So essentially, in matrix terms, A − λI must be singular.

So to make this part zero, we check whether the determinant of A − λI is equal to zero or not. Now what is this determinant? For a 2×2 matrix you just multiply across the diagonals and subtract. Take the matrix [[5, 3], [4, 2]]: you multiply 5 by 2 and 4 by 3, and the determinant is 5·2 − 4·3 = −2 — it is a minus between the two products, not a plus. So that is what we will do: find the determinant of A − λI and set it to zero. In this case I just need to find lambda, because e is no longer in the equation; there is only one variable to find, and with one equation that is not difficult.
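The 2×2 determinant rule just mentioned can be checked in a couple of lines of Python:

```python
# Determinant of a 2x2 matrix [[a, b], [c, d]] is a*d - b*c.
a, b = 5, 3
c, d = 4, 2
det = a * d - b * c
print(det)  # 10 - 12 = -2
```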

So suppose this is my equation now: I have the covariance matrix, which we calculated earlier, minus lambda times the identity matrix. We subtract lambda from each diagonal entry, and then we can compute the determinant, and that is how it will look — though I think the sign on the slide should be minus, not plus.

So now we need to solve this equation, and it comes out as a quadratic equation in lambda — simple, right? How many values of lambda will we get here? Two, which matches the number of pieces we want: PC1 and PC2. How do we get them? I will admit I cheated here — I looked at the answer, because it had to add up — and the values are 28.6 and 13.8. So we got two lambdas. What does lambda represent here? These are the spreads we talked about earlier: there is no rotating of the line any more; we can get them directly through math, using the quadratic formula.
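As a sketch of that quadratic step — note the covariance matrix entries (21.2 on the diagonal, 7.4 off it) are my assumption, back-solved to be consistent with the 7.4 terms and the eigenvalues 28.6 and 13.8 quoted in the lecture:

```python
import math

# Assumed covariance matrix A = [[21.2, 7.4], [7.4, 21.2]] (not stated
# explicitly in the lecture; chosen to match its numbers).
a11, a12, a21, a22 = 21.2, 7.4, 7.4, 21.2

# det(A - lambda*I) = lambda^2 - (a11 + a22)*lambda + (a11*a22 - a12*a21) = 0
trace = a11 + a22                      # 42.4
det = a11 * a22 - a12 * a21            # product term of the quadratic
disc = math.sqrt(trace**2 - 4 * det)   # discriminant of the quadratic
lam1 = (trace + disc) / 2              # approx. 28.6
lam2 = (trace - disc) / 2              # approx. 13.8
print(lam1, lam2)
```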

So we get the lambdas, calculated without going through all of that d1-squared, d2-squared, d3-squared business — that is all baked in here. Are these eigenvalues? Yes, these are eigenvalues. Eigenvalues are scalars — individual numbers, zero-dimensional data — whereas an eigenvector is, as the name says, a vector: one-dimensional data. So what do they tell us? I don't actually care about the absolute values of the lambdas; what I am interested in is the percentage — whether a component captures 50%, 30%, and so on.

So here, if you add up both values and divide 28.6 by that total, you get about 67%. The higher lambda captures 67% of the information in our data, and the second one captures the remaining 33%. Now we have the eigenvalues; next we need to calculate the eigenvectors. So we go back to (A − λI)e = 0 and plug in our values. Suppose e is [x1, x2] — we don't know the numbers yet, but we do know there will be two of them, because we are working with two features, f1 and f2, so every eigenvector will have two components. That is the equation, and now we multiply it out.
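The 67% / 33% split is just each eigenvalue divided by their sum:

```python
# Eigenvalues from the lecture; compute each one's share of the total variance.
eigenvalues = [28.6, 13.8]
total = sum(eigenvalues)  # 42.4
shares = [100 * lam / total for lam in eigenvalues]
print(shares)  # roughly 67.5 and 32.5 percent, i.e. about 67% / 33%
```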

Okay, when we replace lambda with 28.6 — that was the first one — this is what we get, and it says [x1, x2] equals [1, 1]. How do we get there? The first row gives −7.4·x1 + 7.4·x2 = 0, and the second gives 7.4·x1 − 7.4·x2 = 0. The factor 7.4 is common, so this reduces to −x1 + x2 = 0: both have to be equal to get zero. So x1 = x2, and we can say [1, 1], because we are going to look at a unit vector.
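Here is a quick NumPy check that [1, 1] really solves the system for the first eigenvalue; the covariance matrix values are my assumption, reconstructed from the 7.4 and 28.6 in the lecture:

```python
import numpy as np

# Assumed covariance matrix, consistent with the lecture's numbers.
A = np.array([[21.2, 7.4],
              [7.4, 21.2]])
lam1 = 28.6

B = A - lam1 * np.eye(2)   # approx. [[-7.4, 7.4], [7.4, -7.4]]
v = np.array([1.0, 1.0])   # candidate eigenvector with x1 = x2
print(B @ v)               # approx. [0, 0], so [1, 1] solves (A - lam*I) e = 0
```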

So [1, 1] satisfies this equation — and yes, so would [5, 5], or [10, 10], or [100, 100] — but remember, our eigenvectors are unit vectors, so we keep the first number as 1 and go from there. We are not interested in the actual values, just the ratio between our underlying features. For the second eigenvalue, the same procedure gives [1, −1]. So which one is my PC1? I now have two PCs, two eigenvectors: [1, 1] is PC1, and [1, −1] is PC2. So we got everything we wanted with simple maths — it's not that difficult. Now, to the question of why we took variance in the first place: because in statistics, variance captures, in a way, how informative the data is. Let's not talk about covariance yet, just variance.
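As a sanity check on both components, NumPy's `np.linalg.eigh` (for symmetric matrices, like a covariance matrix) recovers the same two directions, just normalized to unit length. Again, the matrix entries are my assumption, matched to the lecture's numbers:

```python
import numpy as np

A = np.array([[21.2, 7.4],      # assumed covariance matrix, consistent
              [7.4, 21.2]])     # with the eigenvalues 28.6 and 13.8

vals, vecs = np.linalg.eigh(A)  # eigenvalues come back in ascending order
pc1 = vecs[:, 1]                # column for the larger eigenvalue (28.6)
pc2 = vecs[:, 0]                # column for the smaller eigenvalue (13.8)

print(vals)                     # approx. [13.8, 28.6]
print(pc1[0] / pc1[1])          # approx.  1.0 -> direction [1, 1]
print(pc2[0] / pc2[1])          # approx. -1.0 -> direction [1, -1]
```

Note that `eigh` may flip the overall sign of an eigenvector; the ratio between the components is what identifies the direction, which is exactly the point made above.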

For example, suppose you have two features that always carry the same number: x1 is 2 and x2 is also 2, then x1 is 3 and x2 is 3, and so on. What do we use features for? To differentiate between individual examples. If both numbers come out the same for every data point, then x2 is not really useful for me — I can just work with x1. Why care about x2 when it is basically the same as x1?

Or take an even simpler example: suppose we have just one feature, x1, and its value for every example is just the number 5. What that means is that this particular feature cannot be used to differentiate between example A and example C — in both cases it is just 5. There is no variation in the data, and when there is no variation in the data, the variance will be very low — here, in fact, zero.
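The zero-variance point is easy to see with Python's statistics module:

```python
import statistics

constant_feature = [5, 5, 5, 5]  # same value for every example
varying_feature = [2, 4, 6, 8]   # values differ across examples

# A constant feature has zero variance: it cannot tell examples apart.
print(statistics.pvariance(constant_feature))  # 0
print(statistics.pvariance(varying_feature))   # 5.0
```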