

Showing posts from October, 2018

5.6 due October 31

The hardest part of this section is the sheer amount of material it covers. It makes sense, but trying to apply it is nontrivial, and remembering the formulas seems daunting. I think it's fun to see how math finds ways to apply the same principles to either discrete or continuous situations. And while the general principles are the same, the application is slightly different in each case. (It reminds me of trying to apply principles from conference to my unique situation.)
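A tiny sketch of what I mean about the same principle in both worlds (the die and the uniform distribution are my own examples, not the book's): expected value is a probability-weighted sum in the discrete case and an integral in the continuous case.

```python
# Expected value: same idea, discrete sum vs. continuous integral.

# Discrete: a fair six-sided die, E[X] = sum over x of x * P(X = x).
die_expectation = sum(x * (1 / 6) for x in range(1, 7))

# Continuous: X uniform on [0, 6] with density f(x) = 1/6,
# E[X] = integral of x * f(x) dx, approximated by a midpoint Riemann sum.
n = 100_000
dx = 6 / n
uniform_expectation = sum(((i + 0.5) * dx) * (1 / 6) * dx for i in range(n))

print(die_expectation)                 # 3.5
print(round(uniform_expectation, 3))   # 3.0
```

Same skeleton both times: value times weight, summed over the whole space; only the "summing" changes.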

5.4 due October 26

This section was hard because of how much material it covered. And the fact that P(X=a) actually means P(X^(-1)(a)) tangles me up if I'm not careful. Stats is funny. It either seems like counting (sometimes in a counterintuitive way) or like crazy definitions and formulas like this. It either seems deceptively simple or needlessly complicated.
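To untangle the preimage thing for myself, here's a toy check (two dice as the sample space is my own example): P(X = a) literally means the probability of the set of outcomes that X maps to a.

```python
from fractions import Fraction

# Sample space: ordered pairs from two fair dice, each outcome equally likely.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

X = lambda outcome: outcome[0] + outcome[1]  # random variable: the sum

# P(X = 7) = P(X^-1(7)) = |{outcomes that X maps to 7}| / |omega|
preimage = [w for w in omega if X(w) == 7]
prob = Fraction(len(preimage), len(omega))

print(len(preimage))  # 6 outcomes sum to 7
print(prob)           # 1/6
```

So the "crazy definition" is just bookkeeping: gather the outcomes, then measure the set.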

5.3 due October 24

The hardest part of this section for me is recognizing when I need to think carefully about probability. Once I do, it makes sense and I can figure it out. But the automatic assumption is still not quite accurate. I do like how much understanding the surprising parts of probability helps us make better decisions. I find that satisfying.

5.2 due October 22

The notation in the chain rule was confusing to me. I'm guessing P(E,F) is the same thing as P(E∩F) but I'm not quite sure. I like this section. I think I need more practice counting still, but it generally makes sense.
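(For what it's worth, P(E,F) is standard shorthand for P(E∩F).) I checked the chain rule P(E∩F) = P(E)·P(F|E) numerically on a dice sample space of my own; the events below are made up for illustration.

```python
from fractions import Fraction

# Sample space: one roll of two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def P(event):
    """Probability of an event, given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

E = lambda w: w[0] + w[1] >= 10   # the sum is at least 10
F = lambda w: w[0] == 6           # the first die shows 6
both = lambda w: E(w) and F(w)

# Conditional probability: P(F | E) = P(E and F) / P(E).
P_F_given_E = P(both) / P(E)

# Chain rule: P(E n F) = P(E) * P(F | E)
assert P(both) == P(E) * P_F_given_E
print(P(both))  # 1/12
```

Writing P as a little function over the sample space makes the counting explicit, which helps me trust the formula.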

5.1 due October 19

The trickiest part of this for me is keeping the counting straight. It can be hard to know if I'm thinking about the problem right when I'm figuring out what to count. Probability seems really mathematical and useful, and yet choosing how to look at the problem and how to count everything up seems like something of an art.
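An example of the "choosing how to count is an art" feeling (the birthday problem is my own example, not necessarily the book's): counting shared birthdays directly is messy, but counting the complement, where nobody shares one, is easy.

```python
# Birthday problem: P(at least two of n people share a birthday).
# Count the complement instead:
# P(all distinct) = (365/365) * (364/365) * ... * ((365 - n + 1)/365).

def p_shared_birthday(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 4))  # about 0.5073 -- over 50% with just 23 people
```

The math is just multiplication; the art was deciding to count the event's complement in the first place.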

4.5 due October 15

The hard part of this section was the way it brings up things I only kind of know about, like stochastic problems or NP-hard problems. While I follow the ideas in the section, I'm left feeling like I only kind of get the whole thing. How did anyone ever prove that the knapsack problems are NP-hard?
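I can at least see why the naive approach blows up: trying every subset of items is 2^n work. A toy brute-force knapsack (my own sketch, with made-up weights and values):

```python
from itertools import combinations

# 0/1 knapsack by brute force: try all 2^n subsets of the items.
items = [(2, 3), (3, 4), (4, 5), (5, 8)]   # (weight, value) pairs -- made up
capacity = 8

best_value = 0
for r in range(len(items) + 1):
    for subset in combinations(items, r):
        weight = sum(w for w, v in subset)
        value = sum(v for w, v in subset)
        if weight <= capacity and value > best_value:
            best_value = value

print(best_value)  # 12: taking (3, 4) and (5, 8) fills the capacity exactly
```

Four items means 16 subsets; forty items would mean over a trillion, which is the kind of growth the NP-hardness discussion is about.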

4.4 due October 12

This section made sense. At first the average word length didn't make sense, but the example helped a lot. (I kept trying to have it be the average length of an encoded word, not the average length of an encoded letter.) It was really nice to read this section that was mostly words after reading the 344 section that was mostly symbols.
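The thing that finally clicked for me, as a sketch (the letter frequencies and codes below are made up for illustration, not from the book): the average word length is the expected number of bits per encoded letter, weighted by how often each letter occurs.

```python
# Expected code length per letter: sum of p(letter) * len(code(letter)).
# Made-up frequencies and a made-up prefix-free code.
probs = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}
codes = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}

avg_bits_per_letter = sum(probs[ch] * len(codes[ch]) for ch in probs)
print(avg_bits_per_letter)  # 1.75, versus 2 bits per letter for a fixed-length code
```

Frequent letters get short codes, so the weighted average beats the fixed-length encoding even though some individual codes are longer.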

4.2 due October 10

The hardest thing for me in this section is following all the details of actually implementing a search. The general "go here, then here, then here" makes sense, but keeping track of all the stacks and dictionaries and lists is what takes me the most time. I liked the cartoon example of what a depth first search would look like in real life. It both made me laugh and made me think.
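The bookkeeping that slows me down, written out in one place (the graph below is my own toy example): a stack of nodes to visit, a set of nodes already seen, and a dictionary of adjacency lists.

```python
# Iterative depth-first search: stack + visited set + adjacency dict.
graph = {                       # toy adjacency lists (my own example)
    'a': ['b', 'c'],
    'b': ['d'],
    'c': ['d'],
    'd': [],
}

def dfs(graph, start):
    visited = set()
    order = []
    stack = [start]             # the stack is what makes it depth-first
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors; reversed() so they pop in their listed order.
        stack.extend(reversed(graph[node]))
    return order

print(dfs(graph, 'a'))  # ['a', 'b', 'd', 'c']
```

Swapping the stack for a queue turns this into breadth-first search, which is a nice way to see what each data structure is actually doing.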

4.1 due October 8

I understand the theory of this section I think, but I'm not sure how well that translates to being able to use it in practice. I can follow the example code in the book, but I don't think I could write it yet. The coin problem is a little funny to read because I do it so differently in my head. You just use as many of the biggest coins as you can and then repeat for the next size down until you get to pennies. It works with our money system, but it baffles me that it doesn't work for some others. And both ways to find the optimal number of coins, while good, seem overkill. (They aren't, because they would work with any money system. They just seem like they are because all of my experience says they are.)
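The baffling part, made concrete (the {1, 3, 4} coin system is a standard counterexample, not our money): greedy takes the biggest coin first, and for some coin systems that is not optimal.

```python
# Greedy coin change: always take as many of the biggest coin as fits.
def greedy_coins(coins, amount):
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

# Dynamic programming: fewest coins for every amount up to the target.
def optimal_coins(coins, amount):
    best = [0] + [float('inf')] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                best[a] = min(best[a], best[a - coin] + 1)
    return best[amount]

us_like = [1, 5, 10, 25]
weird = [1, 3, 4]

print(greedy_coins(us_like, 63), optimal_coins(us_like, 63))  # 6 6 -- greedy matches
print(greedy_coins(weird, 6), optimal_coins(weird, 6))        # 3 2 -- greedy: 4+1+1, optimal: 3+3
```

With {1, 3, 4} and a total of 6, grabbing the 4 first forces two pennies, while 3+3 uses only two coins, so the "overkill" methods really do earn their keep.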

3.4 due October 5

The trickiest thing for me in this section is understanding the implications of what we are learning. I can read and understand most of the section but I don't always catch or remember how that applies when I'm programming. It has been fun to see how the math applies in the real world and makes a difference. I like that.

3.3 due October 3

I don't understand where the exponent h/2 came from in the proof of theorem 3.3.10. That last inequality doesn't make sense to me. I'm surprised that the way to balance the trees still maintains O(log n) complexity. I understand why, but it seems like reordering regularly to keep things "nice" would lose the gain we get for searching. But it doesn't. That is impressive.
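I don't have the book in front of me, so this is a guess, but the h/2 exponent is probably the standard height-balance argument: let N(h) be the minimum number of nodes in a balanced tree of height h, where the two subtrees' heights differ by at most one.

```latex
N(h) = 1 + N(h-1) + N(h-2) \ge 2\,N(h-2)
\;\Longrightarrow\;
N(h) \ge 2^{\lfloor h/2 \rfloor}
\;\Longrightarrow\;
n \ge 2^{(h-1)/2}
\;\Longrightarrow\;
h \le 2\log_2 n + 1 = O(\log n)
```

Halving every two levels is where the h/2 comes from, and it's also why rebalancing doesn't cost the search speed: the height stays within a constant factor of log n.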