Why can't 10 be divided by 3 perfectly?

I subscribe to Netflix. :)
:rimshot:

I love math, and wish I had kept up with it after college.
Same here. I took Pre-Calc during my senior year in High School and aced it, but then after taking a semester off before I started college, I ended up taking Pre-Calc my first semester to refresh myself on the material. I took Calculus I my next semester, but after realizing I didn't need any more math classes for my major, I decided to stop. But I really love math and kinda regret not continuing.
 
Yeah, I should have continued it too. It has pretty much vacated my mind.
 
My old CS chair worked or went to school with a guy whose math doctorate was about studying different levels of infinity.

I love math, and wish I had kept up with it after college. One thing I did learn is that most people's math intuition and beliefs about how it works are usually wrong.

I hate math with every fiber of my being.

It is my life's goal to destroy math. It is also my life's goal to destroy the sun. But that's unrelated to this thread.
 
You've a better chance of destroying the sun...
 
Did the sun touch you as a child? Why else would you have that goal?
 
As evidenced by this thread apparently.

Don't be so quick to judge. For example:

The part in red just seems to have a misunderstanding of how math works, and I did answer the green part with 10/3 inches.

Here's something to try, and I'm not sure if this will work because I'm not sure how exacting Photoshop's software is (and I don't have it anymore). Create a 12 inch ruler or rectangle in Photoshop, cropped so that the available space is exactly 12 inches across, and line it up with their ruler lines, then make a mark at 4 inches and at 8 inches. Next, scale the whole image so that it is ten inches wide; if Photoshop is really exact, it should scale all the lines accordingly, and the marks you made will split the now ten inch ruler into three equal parts.
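Just to spell out the arithmetic behind that trick, here's a tiny Python sketch (nothing Photoshop-specific, just the scaling math):

from fractions import Fraction

# Marks at 4" and 8" on a 12" wide image, then the whole image scaled to 10" wide.
scale = Fraction(10, 12)        # new width divided by old width
for mark in (4, 8):
    new_pos = mark * scale      # where each mark lands after scaling
    print(mark, "->", new_pos)  # 4 -> 10/3, 8 -> 20/3

So in exact arithmetic the marks land at 10/3 and 20/3 inches, which are the thirds of a 10 inch ruler; whether Photoshop's pixel grid can actually show that is the catch.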

This is not necessarily true. It depends on so many things: the number of pixels in the image and on the monitor, the floating point precision of the processor of the particular machine, even the type of image in question. A bitmap image can only go down to a specific pixel, while a vector image can be zoomed into indefinitely (until the machine runs out of precision).

I'd bet that most of the time, those lines would fall on either side of what would be 10/3, but not actually on the point that would be 10/3. The only way it would work is if the total number of pixels happened to be divisible by 3. But the size of a pixel changes depending on the resolution of the monitor (hence the measurement pixels/inch).
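To put rough numbers on the pixel issue, here's a quick Python check (the DPI values below are just examples I picked, nothing canonical):

# Does a 10 inch wide image split into three equal whole-pixel chunks?
WIDTH_INCHES = 10

for dpi in (72, 96, 100, 250):                # example resolutions only
    total_px = WIDTH_INCHES * dpi             # pixels across the image
    third, leftover = divmod(total_px, 3)     # whole pixels per third, plus remainder
    if leftover == 0:
        print(f"{dpi} dpi: {total_px} px -> three clean chunks of {third} px")
    else:
        print(f"{dpi} dpi: {total_px} px -> {leftover} px left over, marks fall between pixels")

So it really does come down to whether the pixel count happens to be divisible by 3.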
 
I hate math with every fiber of my being.

It is my life's goal to destroy math. It is also my life's goal to destroy the sun. But that's unrelated to this thread.

You could only destroy the sun...by using math! :D
 
Don't be so quick to judge. For example:



This is not necessarily true. It depends on so many things: the number of pixels in the image and on the monitor, the floating point precision of the processor of the particular machine, even the type of image in question. A bitmap image can only go down to a specific pixel, while a vector image can be zoomed into indefinitely (until the machine runs out of precision).

I'd bet that most of the time, those lines would fall on either side of what would be 10/3, but not actually on the point that would be 10/3. The only way it would work is if the total number of pixels happened to be divisible by 3. But the size of a pixel changes depending on the resolution of the monitor (hence the measurement pixels/inch).
I did say I wasn't sure how exacting Photoshop was, so I wouldn't call that a misunderstanding of math.
 
I did say I wasn't sure how exacting Photoshop was, so I wouldn't call that a misunderstanding of math.
I'm pretty sure there's no way that a man-made physical thing is going to be accurate to an infinitesimal level, though.

Technically, you could create a program that would adjust and round up/down to a level dependent on the accuracy of its hardware, but it's impossible to physically express a recurring decimal on its own terms.
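That rounding to whatever the hardware can hold is basically what floating point already does. A minimal Python sketch (assuming Python 3.9+ for math.nextafter) of what that looks like around 10/3:

import math
import sys

# 10/3 is silently rounded to the nearest value a 64-bit float can represent.
x = 10 / 3
print(x)                            # 3.3333333333333335 (already rounded)

# The neighbouring representable floats, one step below and one step above.
print(math.nextafter(x, 0.0))       # needs Python 3.9+
print(math.nextafter(x, math.inf))

# The gap between neighbours is roughly x times the machine epsilon,
# i.e. the finest resolution the hardware offers at this magnitude.
print(x * sys.float_info.epsilon)

Anything that falls between two of those neighbouring values simply can't be stored.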
 
I'm pretty sure there's no way that a man-made physical thing is going to be accurate to an infinitesimal level, though.

Technically, you could create a program that would adjust and round up/down to a level dependent on the accuracy of its hardware, but it's impossible to physically express a recurring decimal on its own terms.
I don't see why a computer couldn't calculate and designate things internally on an accurate level, especially if it can calculate in all different bases, but yeah it wouldn't be able to display it accurately due to the pixel thing you mentioned in your previous post.
 
I don't see why a computer couldn't calculate and designate things internally on an accurate level, especially if it can calculate in all different bases, but yeah it wouldn't be able to display it accurately due to the pixel thing you mentioned in your previous post.

Because a computer only has a certain number of bits to work with. And it's impossible to represent an arbitrarily long decimal in binary beyond a certain precision (based on the number of bits).

Besides, if it's an infinitely long number, a computer would never finish calculating the end. They've calculated pi to trillions of decimal places, and it just keeps going.

Computers are just fast, but they aren't special. There's no difference between a computer calculating all the digits of 1/3 and a person writing them out, except that the computer will do it faster.
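For what it's worth, you can actually see that limit from inside the machine: a float can't hold 1/3 exactly, only the nearest 64-bit value, and Python's standard fractions module will show you exactly what got stored (a small sketch):

from fractions import Fraction

# The expression 1/3 produces the nearest 64-bit double, not 1/3 itself.
stored = Fraction(1 / 3)               # recovers the exact value the float holds
print(stored)                          # 6004799503160661/18014398509481984
print(stored == Fraction(1, 3))        # False
print(float(Fraction(1, 3) - stored))  # the rounding error, about 1.85e-17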
 
I'm pretty sure there's no way that a man-made physical thing is going to be accurate to an infinitesimal level, though.

Technically, you could create a program that would adjust and round up/down to a level dependent on the accuracy of its hardware, but it's impossible to physically express a recurring decimal on its own terms.


Eventually you'll get down to an atom. And while you can technically split an atom, it's not an atom any more, which sort of defeats the purpose of cutting up a 10 inch ruler. The number of atoms lined up in the 10 inches would have to happen to be divisible by 3 for it to work.


Mathematicians have been discussing the nature of non-terminating decimals (and infinity in general) for thousands of years.
 
Because a computer only has a certain number of bits to work with. And it's impossible to represent an arbitrarily long decimal in binary beyond a certain precision (based on the number of bits).

Besides, if it's an infinitely long number, a computer would never finish calculating the end. They've calculated pi to trillions of decimal places, and it just keeps going.

Well, I'm no computer expert, that's for sure, but if it's a rational repeating decimal rather than an irrational number like pi, wouldn't it be easier for the computer to deal with? If our brains can work with fractions conceptually, it seems like we ought to be able to design software that can work similarly, rather than working solely with decimals. Maybe that's how Skynet takes over, though.

Eventually you'll get down to an atom. And while you can technically split an atom, it's not an atom any more, which sort of defeats the purpose of cutting up a 10 inch ruler. The number of atoms lined up in the 10 inches would have to happen to be divisible by 3 for it to work.


Mathematicians have been discussing the nature of non-terminating decimals (and infinity in general) for thousands of years.
Next up for the large hadron collider, smashing two rulers together.
 
Well, I'm no computer expert, that's for sure, but if it's a rational repeating decimal rather than an irrational number like pi, wouldn't it be easier for the computer to deal with? If our brains can work with fractions conceptually, it seems like we ought to be able to design software that can work similarly, rather than working solely with decimals. Maybe that's how Skynet takes over, though.

That's not how computers work. Everything is in binary and only has a certain level of precision. There's no difference between pi and 1/3; they both have an infinite number of decimals. Doing fractions like this in software always involves a balance between approximation and programming trickery, which is like how we deal with fractions conceptually. We don't actually compute them to the last digit, we just 'know' how to work with them. That's why we write 1/3 instead of .33333....
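To be fair, that programming trickery does exist off the shelf. Python's fractions module, for example, just stores a numerator and a denominator and does exact rational arithmetic, which is the 'use 1/3 instead of .33333...' idea written as code (a small sketch):

from fractions import Fraction

# Keep 1/3 as an exact ratio of integers instead of a rounded decimal.
a = Fraction(1, 3)
print(a + a + a)          # 1, exactly
print(a * 3 == 1)         # True

# Same idea for the thread's question: 10 inches into three parts.
part = Fraction(10, 3)
print(part * 3)           # 10, no error

# Plain floats accumulate rounding error instead.
print(0.1 + 0.2)          # 0.30000000000000004

It still isn't magic, though: the exact-ratio trick only works for rational numbers like 1/3, and something like pi would still have to be approximated.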
 
That's not how computers work. Everything is in binary and only has a certain level of precision. There's no difference between pi and 1/3; they both have an infinite number of decimals. Doing fractions like this in software always involves a balance between approximation and programming trickery, which is like how we deal with fractions conceptually. We don't actually compute them to the last digit, we just 'know' how to work with them. That's why we write 1/3 instead of .33333....

Computers are losers then.
 
Computers are just dumb, fast machines that only do what they're programmed to do.

Which arguably puts them one notch above people.
 
I disagree. Computers are rapidly becoming very impressive machines. In many ways they are much smarter than a human (try playing chess with a computer lol) but they're (ironically) too logical for their own good. That is the only thing that differentiates us from computers; we are not always logical which is an advantage for the world we live in.
I think in 50 years time, or maybe even less, we will see computers that are virtually indistinguishable from their human counterparts.
 
I think in 50 years time, or maybe even less, we will see computers that are virtually indistinguishable from their human counterparts.

 
I disagree. Computers are rapidly becoming very impressive machines. In many ways they are much smarter than a human (try playing chess with a computer lol) but they're (ironically) too logical for their own good. That is the only thing that differentiates us from computers; we are not always logical which is an advantage for the world we live in.
I think in 50 years time, or maybe even less, we will see computers that are virtually indistinguishable from their human counterparts.

Being good at chess doesn't mean the computer is smart. It means it's fast. Well programmed, but fast. They can sift through data a lot faster than people, but computers don't think. AI isn't there yet. 50 years, maybe. But right now, they still only do things exactly the way they've been programmed.
 
Being good at chess doesn't mean the computer is smart. It means it's fast. Well programmed, but fast. They can sift through data a lot faster than people, but computers don't think. AI isn't there yet. 50 years, maybe. But right now, they still only do things exactly the way they've been programmed.
Well, I think it's very subjective and depends on your definition of intelligence. I think in many ways we are exactly like computers. The brain processes information to make sense of the world. The only thing that separates us, as I said earlier, is that we are not necessarily bound by the rules of logic and can think outside the box. You may call it intelligence, but I call it creativity (and it is only one piece of the whole puzzle when we talk about intelligence). As far as other aspects of intelligence, such as lateral thinking (i.e. chess), computers excel and wipe the floor with us. So I personally don't think computers are dumb; I think they are bound by the rules of logic, which prevents them from ever having any true creativity. That is why many would say a computer isn't 'smart' or that it only follows directions, when in reality, without creativity, we are just really advanced biological computers anyway.
 
There's an interesting documentary about the singularity on Netflix (not sure if it's still there, and I can't remember the title), but it talked about projections by the world's smartest people saying we'll create an AI sometime around 2030 that will be able to create and improve its own versions of AI so rapidly that essentially any issue or problem will become solvable almost instantly. The scary part being that it might not conclude that humans are necessary or "good" anymore at that point. I'd recommend trying to track that down; it was interesting.

Also, I like where this thread is going, this is a much more interesting discussion.
 
Humans are sentient. They are self-aware and can contemplate their own existence. Computers can't do that yet. They just run through lines of code, moving 1's and 0's around. There's no independent thought. There's no ability to work outside the programming because computers can only run the programs as they are written.

It's very easy to anthropomorphize computers because we've programmed many of them to give us outputs in ways humans can understand. We've even built them the ability to talk and move like people. We've made them human friendly. But there's no thought there. Modern computers really aren't any different than the old punch card computers of 50 years ago. Just smaller and faster.
 
