Implement Gibbs sampling as described on Bishop p. 543, modified to compute P(z_i | e), where z_i is an assignment to a single variable and e is an assignment to some set of variables. An assignment is a mapping from variables to values. Note that the algorithm in the book samples from the joint distribution. Instead, to sample from the conditional distribution P(z_i | e), make two changes: change the initialization (line 1 of the algorithm) so that the variables in e are initialized to the values given in e, and in step 2 of the algorithm, do not resample values for the variables in e -- leave them alone. This can be seen as "clamping" the variables in the evidence set e. You can choose how many samples to use and the length of the burn-in period.
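The clamped sampler described above might look something like the following sketch. Everything here is illustrative, not part of the assignment: the toy joint table `JOINT` over three binary variables, the names `gibbs`, `full_conditional`, and `joint_prob`, and the particular sample/burn-in counts are all assumptions; in the homework you would plug in the joint distribution defined by your model.

```python
import random

# Hypothetical joint distribution over three binary variables A, B, C,
# given as an explicit probability table (an assumption for this sketch;
# your homework model defines the real joint).
JOINT = {
    (0, 0, 0): 0.05, (0, 0, 1): 0.10, (0, 1, 0): 0.07, (0, 1, 1): 0.18,
    (1, 0, 0): 0.12, (1, 0, 1): 0.20, (1, 1, 0): 0.08, (1, 1, 1): 0.20,
}
VARS = ["A", "B", "C"]
DOMAIN = [0, 1]

def joint_prob(assign):
    """P(A, B, C) for a complete assignment {variable: value}."""
    return JOINT[tuple(assign[v] for v in VARS)]

def full_conditional(assign, var):
    """Brute-force P(var | all other variables): evaluate the joint at
    each value of var with every other variable held fixed, then
    renormalize."""
    weights = [joint_prob({**assign, var: val}) for val in DOMAIN]
    total = sum(weights)
    return [w / total for w in weights]

def gibbs(query_var, evidence, n_samples=20000, burn_in=1000, seed=0):
    """Estimate P(query_var | evidence) by Gibbs sampling with the
    evidence variables clamped to their observed values."""
    rng = random.Random(seed)
    free = [v for v in free_vars(evidence)]
    # Initialization: evidence variables get their observed values
    # (clamped); free variables start at an arbitrary value.
    state = {v: 0 for v in free}
    state.update(evidence)
    counts = {val: 0 for val in DOMAIN}
    for t in range(burn_in + n_samples):
        for v in free:  # never resample the clamped evidence variables
            state[v] = rng.choices(DOMAIN, weights=full_conditional(state, v))[0]
        if t >= burn_in:  # discard the burn-in samples
            counts[state[query_var]] += 1
    return {val: c / n_samples for val, c in counts.items()}

def free_vars(evidence):
    """All variables not clamped by the evidence."""
    return [v for v in VARS if v not in evidence]
```

For example, `gibbs("A", {"C": 1})` clamps C to 1 and returns an estimate of P(A | C = 1), which for the toy table above can be checked against the exact answer 0.40 / 0.68.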
Part of the algorithm involves computing the probability distribution of a variable given an assignment of values to all the other variables, P(z_i | z_\i). This probability can be computed using the equation on p. 382, replacing the integration with summation since we are dealing with discrete random variables. Below the equation, Bishop discusses a more efficient way of doing this based on the concept of a Markov blanket, but you can ignore that for the homework and just do it the "brute force" way, by directly implementing the equation.
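In the discrete case, the equation amounts to evaluating the joint at every value of z_i with the other variables held fixed and renormalizing: P(z_i | z_\i) = P(z) / sum over z_i of P(z). A minimal standalone sketch of this brute-force computation, assuming the joint is available as a callable `joint_prob` over complete assignments (a hypothetical interface, not one fixed by the assignment):

```python
def full_conditional(joint_prob, assign, var, domain):
    r"""Discrete version of the equation on Bishop p. 382:
    P(z_i | z_\i) = P(z) / \sum_{z_i} P(z).
    `assign` is a complete assignment; `var` is the variable z_i to
    resample; `domain` lists its possible values."""
    # Evaluate the joint at each candidate value of var, keeping every
    # other variable fixed at its current value.
    weights = {val: joint_prob({**assign, var: val}) for val in domain}
    total = sum(weights.values())  # the normalizing summation
    return {val: w / total for val, w in weights.items()}
```

For instance, with a joint over two binary variables X and Y where P(X=0, Y=1) = 0.2 and P(X=1, Y=1) = 0.4, `full_conditional` at Y = 1 returns 1/3 for X = 0 and 2/3 for X = 1.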