Thursday 4 April 2013

Post Five

Big O, Omega, Theta, Halt and reduction

    These were by far my favorite topics in the course. Big O, Omega and Theta especially, because they tied everything from the course up to that point into something more distinctly computer science related. Not that the prior material wasn't useful or important in its own right, but it was great seeing it all come together so neatly.

    The proof that a function Halt cannot exist was, at least for me, counterintuitive at first glance, though as I began to understand it better it started to make more sense. Reduction was also somewhat confusing at first, because the details of the function you are trying to show cannot exist seem almost irrelevant. Eventually it clicked for me that the important thing is what we know about Halt, not the particular function we are trying to show is non-computable. That is, if we can implement Halt by assuming the given function is computable, then by contradiction that function cannot be computable (since we know Halt is not computable).
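    Just to make that concrete for myself, here's a rough Python sketch of the shape of the argument (always_returns_42 is a made-up stand-in for whatever function we're trying to show is non-computable; it obviously can't actually be implemented, the sketch only shows the structure of the reduction):

def always_returns_42(g):
    # Stand-in for the function we *assume* is computable.
    # It cannot actually be implemented -- that is the whole point.
    raise NotImplementedError

def halt(f, i):
    # If always_returns_42 were computable, this would decide halting.
    def g(x):
        f(i)        # loops forever exactly when f(i) does not halt
        return 42   # reached only if f(i) halts
    # g returns 42 on every input  <=>  f(i) halts,
    # so this one call would answer the halting question for (f, i).
    return always_returns_42(g)

Since we already know Halt is not computable, always_returns_42 can't be computable either, which is exactly the contradiction the reduction is after.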

Post Four

Term Test Two

    I thought this test was well written and even easier than the first. That being said, I still managed to get what I consider the easiest of the three questions wrong. My main mistake was fairly simple: I completely disregarded the negation of the statement mid-proof and made an assumption about q rather than simply showing its existence (q being the first q in the statement). My reasoning at the time was that the value of this q was, in and of itself, irrelevant to the overall proof. That is, I did not need to use it directly when showing there is no q2 (the other q in the statement) such that m = … But, as should be obvious to most readers, my opinion of the value's relevance is itself irrelevant. Because the negation of the original statement claims that such a q exists, I was obligated to clearly show its existence regardless of its usefulness or obviousness. On the bright side, I can certainly say I'll never make that mistake again.

    Another interesting thing about my test is the answer I gave for the third question. Mine was different from the sample solution provided, because I decided, out of comfort with the method itself, to prove the statement by contradiction rather than directly. Looking back, the direct proof is, at least in my case, only marginally easier. To prove the given statement by contradiction I showed that if floor(x) + 7 <= x + 6, then there exists a z, in this case floor(x) + 1, that satisfies the negation of the definition of floor, which is a contradiction. The reason I say the provided solution is only marginally easier is that I had encountered a similar statement, proved by contradiction, before the test, so I already knew exactly how to prove it that way.
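    For the record, here's roughly how that contradiction goes, written out. I'm assuming the statement was something like "for all real x, floor(x) + 7 > x + 6", which is how I remember it:

\begin{align*}
&\text{Assume, for contradiction, that } \lfloor x \rfloor + 7 \le x + 6 \text{ for some } x \in \mathbb{R}.\\
&\text{Then } \lfloor x \rfloor \le x - 1, \text{ so } \lfloor x \rfloor + 1 \le x.\\
&\text{But } \lfloor x \rfloor + 1 \text{ is an integer that is at most } x \text{ and greater than } \lfloor x \rfloor,\\
&\text{contradicting that } \lfloor x \rfloor \text{ is the greatest integer at most } x.
\end{align*}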

Post Three

Products of Sums

1) Understanding the problem

What is the problem at hand? Given a positive integer n, find the list of positive integers summing to n that has the largest product.

Some sample data:

n | Lists                                                                      | P (largest product)
1 | [1]                                                                        | 1
2 | [1,1], [2]                                                                 | 2
3 | [1,1,1], [1,2], [3]                                                        | 3
4 | [1,1,1,1], [1,1,2], [2,2], [3,1], [4]                                      | 4
5 | [1,1,1,1,1], [2,2,1], [3,2]...                                             | 6
6 | [1,1,1,1,1,1], [2,2,2], [4,2], [3,2,1], [3,3]...                           | 9
7 | [1,1,1,1,1,1,1], [2,2,2,1], [4,2,1], [3,3,1], [3,2,2], [3,4]...            | 12
8 | [1,1,1,1,1,1,1,1], [2,2,2,2], [4,4], [3,3,2]...                            | 18
9 | [1,1,1,1,1,1,1,1,1], [2,2,2,2,1], [4,4,1], [3,2,2,2], [3,4,2], [3,3,3]...  | 27
(I found some more lists for higher n values but this is enough to see what's happening)

What we need to strive for here is to distill this data into a general form so that we can solve the problem easily for any positive integer n.
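    Just to double-check the table, here's a small brute-force Python sketch (the function names are mine and there's nothing clever about it; it simply tries every list of positive integers summing to n):

from math import prod

def partitions(n, smallest=1):
    # Yield every non-decreasing list of integers >= smallest that sums to n.
    if n == 0:
        yield []
        return
    for first in range(smallest, n + 1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def max_product(n):
    # Largest product over all lists of positive integers summing to n.
    return max(prod(p) for p in partitions(n))

# e.g. max_product(9) == 27 and max_product(8) == 18, matching the table above.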

2) Devising a plan

So, where do we go from here? Well, we have all this data; we might as well put it to use. From what I can gather, the commonality between the lists with the largest product is that they are made up exclusively of the numbers 2, 3 and 4. But since 2+2 = 4 and 2*2 = 4, we can replace every 4 with two 2s and say the lists are made up of only 2s and 3s. We also note that 3s seem to be the optimal choice, as long as choosing a 3 does not force a 1 into the list.

Our plan, then, is to generalize and see just how many 3s we can get out of a number n without leaving a remainder of 1.

3) Carry out the plan

We can look at the remainder of dividing n by 3 and handle each case separately:
  1. n % 3 = 0: the list is made up of n/3 threes.
  2. n % 3 = 1: the list is made up of (n-4)/3 threes and one 4 (or two 2s). This is because when n % 3 = 1, (n-4) % 3 = 0; we add a 4 to the list to compensate for subtracting it. This way we get the most 3s out of n without a 1 ending up in the final list.
  3. n % 3 = 2: the list is made up of floor(n/3) threes plus the remainder as one 2.
So given a positive integer n we can easily find the list we want by following these three cases.
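    A short Python sketch of those three cases (the function name is mine; it returns one optimal list and assumes n >= 2):

def best_list(n):
    # One list of positive integers summing to n with maximal product,
    # following the three remainder cases above (assumes n >= 2).
    if n % 3 == 0:
        return [3] * (n // 3)
    if n % 3 == 1:
        return [3] * ((n - 4) // 3) + [4]   # a 4 instead of leaving a 1
    return [3] * (n // 3) + [2]             # n % 3 == 2: keep the 2

# best_list(9) == [3, 3, 3], best_list(7) == [3, 4], best_list(8) == [3, 3, 2]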

Tuesday 26 February 2013

Post Two

Opinion on Proofs

    Now that we've learned the basics, we've moved on to proving implications. Though proofs can be challenging, they're what makes this course interesting for me. In my opinion, the content we learned prior to starting proofs was, although challenging in its own right, rather dull. Luckily, instead of learning about symbols and their relations to one another, we're now using them to gain a better understanding of a given implication by either proving or disproving it.

    The reason I find proofs interesting is also one of the reasons I enjoy computer science: writing a program is, in and of itself, akin to writing a proof. You use the basics of a given language in a logical manner to reach a specific goal, much like how we use logical symbols to reach a specific conclusion about an implication. This requires some creativity and is more engaging than simply memorizing facts.

Proof Outline

    My initial reaction to the proof outline was that it is redundant and unnecessarily contrived. This is because when you write down a proof you already understand it, so the value of its structure may go unappreciated. But it's important to keep in mind that you're not writing out the proof solely for your present self, but rather for anyone who may read it in the future. With that in mind, things like the scope of each part of your proof become vital, and that's why a proper proof outline is so important.

Saturday 9 February 2013

Post One

    This post is fairly late but at least it's better than nothing.

    So far I've found this course interesting, mainly because I had never learned how to express statements precisely using symbolic logic. Of course, this isn't meant to replace or nullify the usefulness of expressing yourself in English, but rather to help you remove ambiguity from what you wish to convey. The ability to do this is especially pertinent for computer scientists, which is probably why the course is a CSC course. I say this mainly because when programming, any misunderstanding of your task, however small, can have consequences. Perhaps those consequences are easily reversible mistakes, or not even an issue in the overall scheme of a project, but they may also be extremely detrimental. This is why being able to communicate using symbolic logic statements, or far more practically being able to recognize and remove ambiguity from an English sentence, will help programmers decrease the chance of wasting time writing useless code.

    Now onto the discussion of course content so far.

    Easy to learn, hard to master is how I'd describe the course content. This is because we've only had to learn a handful of simple symbols that have been the basis of all our course work; the difficulty comes from how we use those symbols. This is actually the reason I like the course; it's also why I like computer science. In a given programming language, for example, it generally isn't too difficult to learn and understand the basic 'tools' (i.e. method calls, ifs, loops, etc.) it has to offer. The challenge arises from creating and understanding complex structures out of those 'tools'. So far the only 'tools' of this class are the following:

Implies ⇒
If and only if ⇔
Not ¬
And ∧
Or ∨
For All ∀
There Exists ∃
Union ∪
Intersection ∩
Subset ⊂
Member of ∈

    Fairly simple, right? One month in and we have a test covering nothing more than this? That's because the difficulty of the course does not come from the 'tools' themselves but rather from the things we build with them. And as we've progressed through the first month, the things we've been building and interpreting have become more and more complex. Maybe some examples will make this more interesting and clear.

One of the first statements using these symbols that we learned was probably this:
    ∀ x ∈ X, P(x)

    This is quite simple: it states that P(x) is true for every x in the set X. But we've moved far beyond this simple structure and have used the 'tools' available to us to create complex statements such as the following:

    ∀x ∈ X, (P(x) ∧ ∃y ∈ X, (W(y, x) ∧ (∀w ∈ X, W(w, x) ⇒ E(w, y)))) ⇒ E(x, MegaMoth)

    The meaning of the statement isn't all that relevant (for those curious, it's the statement from Assignment 1, Question 3, Part c); what I want to convey is the complexity of the structures we're creating. I find it fascinating that with only a few symbols to work with we can convey complex statements so elegantly and precisely. Things get even more interesting when you learn that you can phrase, so to speak, the symbols in different ways while retaining the same meaning. For example, the following two statements are equivalent:

    ∀ x ∈ X, P(x) ⇒ Q(x)
    ∀ x ∈ X, ¬P(x) ∨ Q(x)
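    A quick way to convince yourself of that equivalence is to check every combination of truth values. Here's a throwaway Python check (not something from the course, just how I'd sanity-check it):

# p => q agrees with (not p) or q for every combination of truth values;
# p => q is written here as "q if p else True".
for p in (False, True):
    for q in (False, True):
        assert (q if p else True) == ((not p) or q)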

    Being able to phrase symbolic statements differently becomes useful when building statements whose meanings are defined relative to another, that is, a statement's contrapositive, negation, converse, etc. Even nuances in the positioning of symbols within a given statement can add another level of complexity to what we can state. For example, the next two statements mean completely different things:

    ∀ x ∈ X, ∃y ∈ Y, E(x, y)
        This says that for every x there is at least one y, not necessarily the same y for each x, such that...
    ∃y ∈ Y, ∀ x ∈ X, E(x, y)
        On the other hand, this says there exists at least one y, the same y for every x, such that...
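    To make the difference concrete, here's a tiny finite-domain example in Python (the sets and the relation E are invented just for illustration):

X = {1, 2, 3}
Y = {1, 2, 3}

def E(x, y):
    return x == y   # an arbitrary example relation

# For all x there is some y (each x may pick its own y):
print(all(any(E(x, y) for y in Y) for x in X))   # True

# There is a single y that works for every x:
print(any(all(E(x, y) for x in X) for y in Y))   # False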

    So to sum up, my point is that though the basic 'tools' we have at our disposal may be easy to grasp, the challenge in the material arises from what they can create. We must understand the nuances of certain structures, the equivalence and relative meanings of different statements, and the translation of detailed English sentences into complex symbolic logic statements.

    On that note, I think this is a good spot to end my first post. I don't see much point in discussing the material we've learned so far in more detail, since the class has just written test one, so it doesn't feel all that relevant at the moment. Besides, that's what the lecture notes are for, right? I'd rather make these posts more than just recaps of what we've done.