hubskier for: 3810 days
Calling it "narc" at all makes me feel like whoever is speaking will end up in jail someday. The only time I've heard it used, it was as a threat against me by a schoolmate in high school who didn't want me to tell the teacher about anything. If you call someone a snitch, a narc, or any similar term, I have zero empathy for you. Reporting people who break the law should, within reason, be encouraged, promoted, and congratulated.
Looks to me like your problem is that they don't put enough effort into their Mac app.
We exist to produce and be slightly unhappy, and the ideals of achievement we are given are always based on where we stand today. I like to think of human beings as a species evolved to function in a society; when you look at our traits and actions through that lens, things start making a lot more sense.

Consider eyesight. You'd figure having bad eyes would have killed off humanity, but we tend to be either long-distance deficient, short-distance deficient, or old. In a group where all three types exist, you work together to get optimal results. Same goes for people being gay: they can't have kids, but they can still contribute to society and are an extra set of hands to help the group handle its kids. Alternatively, they are good candidates for things like war, as they will have fewer attachments than most.

Our various quirks and ideals, the weird kid standing in the corner looking around the room, the talkative bunch who are super friendly, the quiet sort who keep to themselves: each of these people, I believe, is driven into a specialized role in society by their genetics, despite it all being human genetics. One watches out for danger; one remains isolated and forms new ideas outside the thoughts of the group; one binds the group together and serves as a leader. It's amazing how incredibly diverse and functional human society is when you think about it. So many things just work so perfectly when all the gears fit together, and all these people who don't know what the heck they are doing outside of natural tendencies end up creating systems that launch rockets to the moon or dominate our entire planet.

It's why eugenics was so horribly wrong. To get rid of disease, to get rid of issues in the human genome, is to rid ourselves of the diversity those things bring.
Autistic people may be dysfunctional, but I'll guarantee you that gene serves a very important function in hundreds of people who are functional, serve very important roles in society, and likely show some of the same traits as those with various mental diseases. Schizophrenics may be delusional, but I'll bet it takes a bit of that disconnect and paranoia to make art or to analyze systems deeply. I think there are a lot of people who see mankind as this bleak, dull thing headed on a crash course with destruction while we all yell, scream, and fight. In reality I think there is order to the chaos, and I tend to feel a sense of pride when I see a market crash or riots break out. The bad isn't necessarily bad. Riots destroy property, but I think they serve a much more important function than that in sending information and making a point. A market crashing is a correction of ideas, a learning process on a grand scale. The system we live in is smarter than it looks, I think, even when it looks stupid and broken. It's why we can't let people dictate our actions and scrub the world of all that is bad. The black goo over the machine isn't bad; it's oil.
> Barack Obama got the votes of 92 percent of Democrats in 2012, and she'll be in the same neighborhood.

This isn't really a good comparison to make. Obama was the Sanders of the last election; he just wasn't as extreme, and his Sandersism was mostly a product of inexperience. People aren't happy that they voted "Sanders" in the form of Obama and got "Clinton" instead last election, and they aren't going to be happy about it this year either.
"Tech culture" started as an interesting group of people dedicated to a new hobby that just took off. Modern tech culture seems like a bunch of overhyped idiots looking to make big money off of ideas that aren't going to work at all. I hope I can get a good programming career at a company that produces things here in the Midwest before this bubble goes pop.
spam?
Okay, I have to make another comment about this. You know that comparison I made about that one guy a bit ago?
I think you need to shift your focus a lot more towards your own commentary: what you would have thought about any of these points as a young-earth creationist, how you would have handled the obvious counterpoints, and then some reasons your thoughts about those counterpoints were flawed as well. As is, this seems more like a thunderf00t-style "laugh at stupid Christian videos" series than one aimed at people who aren't yet atheists. I think you should play more to your strengths and unique situation here. Despite my criticism, I did enjoy the video. You have a somewhat "martimer81" vibe about you at the moment, e.g.:
I will agree entirely that the TSA has a whole lot of excessive abuse of power and misuse of funding and resources.
The point of the rules is not to outright ban all things which could ever be dangerous; the point is to ensure that nothing posing an imminent danger to an airplane is allowed onto the plane. People can cause havoc with lithium-ion batteries. People can cause havoc with dry ice. But the measly little pop and fire isn't going to kill people, and the smoke and warning the device gives off, plus the tampering required to get the battery to explode, make it too difficult to cause substantial harm with. People aren't going to cause the instant death of five nearby people with dry ice or a lithium-ion explosion, and setting those things in motion is a very clumsy and hard-to-pull-off operation within the compartment of an aircraft. Again, and I cannot stress this enough, the people running the TSA and making policy decisions are experts at what they do. Even though their decisions may not seem logical, I am almost certain that if you had the scope and knowledge of the people who set these rules, you would consider the policies relatively tame and reasonable.
I assume the pilot or passengers can manually trigger them, no?
What if the AC goes out, and the lines to the oxygen masks suddenly stop working too? What sort of danger would a big block of dry ice pose in those conditions? People would just put on their O2 masks, and the airplane would scrub or release the CO2 from the atmosphere in time. As well, dry ice is CO2, so it would be noticeable as everyone gets short of breath, and the big cloud of fog would be a massive tell. 3.2 oz of shampoo is not enough for any of the conventionally existing gelled explosive materials to cause significant damage; the limit isn't arbitrary or stupid. http://blog.tsa.gov/2008/02/more-on-liquid-rules-why-we-do-things.html
Lithium-ion batteries are required for phones and other tech to work. Secondly, the "explosions" these things make aren't anywhere near severe or dangerous enough to damage the plane, and as far as I am aware it can take quite some time to get one to explode in any serious fashion. Cargo holds likely see a lot of depressurization, heat, cold, and other factors that could damage batteries or cause them to have issues, so the batteries should probably be kept in a fairly safe, controlled environment.
My point was that carrying a foot-long bullet/metal container onto a plane shouldn't be allowed. Even if it can't be used as a weapon directly as designed, a giant bullet can still do damage outside of a gun. If there is a reasonable expectation that the things you mention could be used to smuggle things onto a plane, and they are too difficult to check, they should be banned as well.
A round that large could contain a lot of explosives, and you'd have to open the round up to confirm they aren't there. So I don't blame the TSA for banning carry-on items like that bullet.
> Natural medicine and Natural Health is something that should be looked into.

It is, and has been. The stuff that works well and is cheap enough to turn into a pill and distribute becomes known as "medicine".
> In a car brain, software, processors and an operating system need to run algorithms that determine what the car should do

Nvidia thinks so: http://www.nvidia.com/object/drive-px.html Within a journalistic article like this, neural networks more than fit this definition. They are, after all, just a bunch of big arrays with a bunch of weights and activation functions.
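Stripped to its core, that "arrays with weights and activation functions" description looks something like this minimal single-neuron sketch (pure Python; the weights and inputs are made up purely for illustration, not from any real model):

```python
import math

def sigmoid(x):
    # activation function: squash any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, bias):
    # one neuron: weighted sum of inputs plus a bias, then the activation
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# a "layer" is just a list of (weights, bias) pairs: big arrays of numbers
layer = [([0.5, -0.2], 0.1), ([1.0, 0.3], -0.4)]
outputs = [forward([2.0, 1.0], w, b) for w, b in layer]
```

Everything a trained network "knows" lives in those number arrays; training is just the process of nudging them until the outputs look right.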
Ultimately the human mind can only store so much, and human beings are more than happy to settle into a routine. People already spend 20+ years doing the same thing every day; what's to say we couldn't spend an eternity doing something like that? Do we really only enjoy life because we see or feel new things? We would lose a sense of fear only after we experience something many times and learn it does not cause us pain, or learn to endure the pain and treat it as normal. Ultimately, fear in humans is more attuned to culture and pain than to fear of death. Maybe some things we would fear more, such as being buried alive in concrete, thanks to immortality. The planet cannot host an eternity of life if these people continue to have kids. But if we control the population, saying that those who choose to live forever cannot have kids and that those who have had kids cannot live forever, there would be no issue.
Most self-driving systems today use neural networks; I don't think expert systems could manage the complex issues that pop up while driving. As a result, I don't think we can really understand all the nuances these systems are capable of once they have been trained for so long to drive. It's easy to create a network that learns to do a task, relative to learning how the completed system is thinking. These aren't brute-force approaches anymore; there is a system that learns as time passes and gets better at what it does. As such, these networks have one job and do that one job very well, and in this case that job is to prevent crashes. Fiddle with that network to impose artificial limitations and you are interfering with a system optimized to do exactly that, and more crashes will result in the long run. Although I'm sure there are cases where things go wrong with the program, or things need to be tweaked, those aren't the same as directly overriding the car when it decides to take a course of action that could lead to hitting a school bus rather than a normal car. It may well be that hitting the school bus causes less total harm for some reason, and we should be sure we understand the reasoning of the machine before we decide to mess with it.
I assumed you were referring to the idea of machines needing to know "human morality" in order to replace human actions, and was saying that machines shouldn't have our morality imposed on them, they should be allowed to come to decisions naturally based on the learning algorithms they are built with. I ultimately think human drivers should be entirely replaced by machine drivers. The horse-driver analogy is interesting, but ultimately it defeats the purpose of the self driving car. I think it makes a good stopgap, though.
> how do we convert human driving intelligence into machine driving intelligence?

Why are we trying to convert human driving intelligence into machine form at all? Why not allow these machines and algorithms to learn the best courses of action to reduce accidents, without imposing things like "you should hit the car instead of the bus of school-children"? You've made a system that learns how to best manage situations; you don't turn around and undercut the decisions that system makes in order to appease people proposing absurd hypothetical scenarios.
In essence, there is a class of very wealthy people who will keep making it so their kids become wealthy as well, while the rest of the world continues to lag behind. This class of people is separating from the lower classes and is dominating the political process. Highly wealthy groups with great amounts of power, separated from having to see the rest of society, produce highly negative outcomes in the long run, as they turn the government into something that serves them rather than the common person. This is pushing the Democratic party toward figures like Clinton and away from Sanders. It is also why the Republicans are getting absolutely killed by Trump, as the frustration of the working-class poor boils over in a party dominated by the working-class poor.

> Geographic segregation dovetails with the growing economic spread between the top 20 percent and the bottom 80 percent: the top quintile is, in effect, disengaging from everyone with lower incomes.

> The top quintile is equipped to exercise much more influence over politics and policy than its share of the electorate would suggest.

> Equally or perhaps more important, the affluent dominate the small percentage of the electorate that makes campaign contributions.

> But the separation is not just economic. Gaps are growing on a whole range of dimensions, including family structure, education, lifestyle, and geography.

> Reeves cites data showing that 56 percent of heads of households in the top quintile have college or advanced degrees, compared to 34 percent in the third and fourth quintiles and 17 percent in the bottom two quintiles.

> A Democrat whose wallet tells him he is a Republican is unlikely to be a strong ally of less well-off Democrats in pressing for tax hikes on the rich, increased spending on the safety net or a much higher minimum wage.
This is a pretty great/well written/comprehensive comment.
This is why I said that insurance companies should be able to charge more for things like smoking (they can), and that without insurance in the US you have to pay for your treatment anyway. It is currently illegal in the US not to have insurance, so not being able to pay high costs shouldn't be a significant issue. As well, it is well within the government's rights to charge for Medicaid if it wanted to punish smokers for smoking, or people for living unhealthy lifestyles. A tax is not needed.
> My thinking is that those parts would still need to be capable of type conversion of some sort, given that the X module might output, say, strings while Y gives away arrays

Neural networks work by representing information as sets of numbers, weights, and functions. They don't really output "numbers" or "strings" or "arrays"; they almost always take numbers in and put numbers out, and the numbers are then converted into whatever meaning they need to carry. For example, a network that identifies age from an image will take in numbers as RGB values across a massive array and use a big set of arrays and functions to narrow those down to a single number, which is slowly "optimized" until it starts matching somewhat accurate examples. A network that deals with strings may just break each string down into characters and have a node for each character. A network that deals with images uses a node for each RGB value.

So if a network was trained on words in one type of sentence, it will be used to one region lighting up to mean one thing. Feed it a new type of sentence and that region will light up, but it will no longer mean what it should. That's the real issue with differing datatypes, not so much arrays versus ints versus doubles. Say you have the sentences "This is a great time" and "This is a horrible time", and you train a neural network on those. Now you input "This time was a great one". The area that normally lights up in response to "great" is now staying dim with the input of "was", and the network will say the sentence is not positive. It was trained in an environment where "great" or "horrible" always appear in the same place, and it assumes that fact. Given a different kind of input it will mess up, and it will be useless in those cases.
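To make that positional brittleness concrete, here is a toy stand-in, not a real trained network: the "look only at word position 3" rule is a hand-built assumption mimicking what a network trained on those two fixed-shape sentences might effectively learn.

```python
# Stand-in for a network whose training data always had the sentiment
# word ("great" / "horrible") in word position 3.
POSITIVE_AT_POS_3 = {"great"}
NEGATIVE_AT_POS_3 = {"horrible"}

def classify(sentence):
    words = sentence.lower().strip(".").split()
    # the "network" only ever inspects index 3, as its training taught it
    w = words[3] if len(words) > 3 else ""
    if w in POSITIVE_AT_POS_3:
        return "positive"
    if w in NEGATIVE_AT_POS_3:
        return "negative"
    return "unknown"

print(classify("This is a great time"))       # matches the training shape
print(classify("This time was a great one"))  # "great" moved; signal lost
```

The second sentence contains "great", but because it sits in a new position the classifier never sees it, which is the "right region stays dim" failure described above.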
> in fact, it may be up to chance for us (or the self-improving AI) to do so right now, because neuroscience is nowhere near that far in explaining our thinking.

It's essentially what AI are doing right now: massive amounts of trial and error, hoping the AI finds some solution that makes sense and runs with it. The AI is an "optimizer" in that it continuously tries to find a better solution, rather than thinking through the problem the way a person with a big set of previous experience might. We just kind of hope it stumbles on the right answer with a whole lot of repetition and a bit of smart direction-picking.

The problem is that two sets of data can look very different, with very different inputs, and it's almost impossible to make a single program that can deal with more than one or two "types" of data. We also need a massive set of meaningful data to do something like this, which is not easy to come by. That's the problem with neural networks: a network has a set of inputs and then a set of nodes that represent the connections and ideas from those inputs. Put in a new set of info from a different situation and those nodes suddenly become meaningless. I think the big thing here will have to be an AI that manages other AIs that each deal with their own sets of data: one whose job is to identify the kind of information in front of it and find the best program to solve that problem.

All this training is expensive and hard to do as well. Neural network/machine learning rigs are set up with a whole bunch of high-end graphics cards and eat up a whole lot of processing power; the bigger the network, the more costly it is to run. You can see this problem as having been solved in the brain, which has billions of neurons but only uses a few percent at a time for any individual input. The brain doesn't even exist in a space where a processor has to step through each neuron to simulate it before it can produce some sort of output.
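A minimal sketch of that "optimizer" idea, assuming a simple hill-climbing loop as a stand-in for the fancier gradient methods real systems use: the loop never reasons about the problem, it just keeps any random tweak that happens to score better.

```python
import random

def score(x):
    # the "fitness" being climbed; the optimizer has no idea what it means
    # (here the best possible answer is x = 3.0)
    return -(x - 3.0) ** 2

def hill_climb(steps=2000, step_size=0.1, seed=0):
    rng = random.Random(seed)       # seeded so the run is repeatable
    x = rng.uniform(-10, 10)        # start from a random guess
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        # pure trial and error: keep any tweak that scores better
        if score(candidate) > score(x):
            x = candidate
    return x

best = hill_climb()
```

With enough repetition the guess drifts toward 3.0, but nothing in the loop "understands" why; that is the stumbling-onto-the-answer dynamic described above.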
Do you think it's possible to create an AI that's whole purpose would be to crunch through data (with the reward being more data connections made)?
> I am unconvinced that the health consequences of consuming so many high fructose drinks are not an externality.

I never mentioned the impact the taxes have on the poor; it's not really a huge concern for me. If the government wants to help the poor it can do so through extended, strong social safety nets, good education policy, and a strongly progressive tax system. If the poor are doing something harmful to themselves, then making them stop by tweaking the system properly will not hurt them in the long run; it will help them.

Externalities are harms done to others by an actor. When a person drinks sugary drinks, they are choosing to do something to themselves; it is not a case of sugary drink companies producing things that harm others who have no control over the situation. People can, even if addicted to a substance, choose to stop using sugary drinks. You can't choose whether to be affected by carbon emissions, but you can choose whether to drink sugary drinks. The government should focus on ensuring people have that choice through information campaigns (informative ones, not propaganda ones). Lung cancer through secondhand smoke is a negative externality of cigarettes; lung cancer in smokers is not. In this framing, smokers who choose to smoke near other people should be responsible for the damages they do to them, and in fact they are, through higher insurance payments and high cigarette taxes. We can solve obesity without resorting to excessive taxes, by setting up the system to function properly in the first place.
> Why is it wrong for the government to encourage individual behavior through taxes and subsidies?

Human beings act based on their environment; we don't do things for no reason. Sugary drinks are popular for a reason, and it is that reason we need to think about and combat, rather than taxing people for reacting to the forces acting on them. In this case we need to end the subsidies for farms and crops, so that people pay the true cost of their sugary drinks, and to allow the healthcare system to charge extra for an unhealthy lifestyle. These are natural systems that adjust themselves over time and react dynamically to the forces on society. A tax, by contrast, funnels money away from where it should be going and sticks around long after it is needed.

Carbon taxes are a different category of thing, in that the action is an externality. The emission of carbon HAS to be cleaned up and offset, and companies today are NOT paying that price. The point of a carbon tax is specifically to offset the damage done by a harmful activity that is a fairly constant and ever-present issue in society. There will never be a time when a carbon tax is not a good thing, and it corrects a balance in the economy that cannot be fixed otherwise.

The government isn't paying for our healthcare, so it needs to stay out of taxing us for the choices we make about our health. If it would like to charge those on Medicare for drinking sugary drinks, it is well within its rights to. If it is involved in investing in and pushing for the offset and cleanup of carbon emissions, it is within its rights to charge for it.
I'm partially talking out of my ass here; this is casual observation of things I've seen online rather than expert knowledge, so take my words with a grain of salt.

Normally people assume computers are big predictive machines that solve a problem 100% or 0%: a computer does its job perfectly, or it doesn't do the job at all. People imagine an intelligent computer as this super-machine that is far superior to human knowledge, super logical, and all the rest. But with things like neural networks and other machine learning systems, you can watch the machine slowly develop much like a kid might, making really funny decisions that make sense if you are the machine, acting from a subjective set of knowledge, but that don't make much sense in the grand scheme of things. For example, one of these programs being trained to navigate a maze might find out that you can hug the right wall to get to the end, and be satisfied with that solution. No attempt to find the optimum; the program lazily grabs the first strong solution and follows it every time, because trying a new path is more harmful than not. Sound familiar? This is something the AI designer has to actively tweak variables to fight against, making sure the machine has to experiment every once in a while, or making it so that following the same path over and over starts to produce a negative or zero reward for the AI.

Bored yet? Neural networks often have to be reduced to a certain size when being trained; otherwise the network literally sets itself up to memorize everything in the data set you train it on rather than actually learning to solve the problem. It's called overfitting in an official setting: http://www.mathworks.com/help/nnet/ug/improve-neural-network-generalization-and-avoid-overfitting.html?s_tid=gn_loc_drop Memorizing everything is the best solution to the problem, just not the one we want.
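A toy illustration of that overfitting point, using two hand-written "models" rather than actual neural networks (my own sketch, for the shape of the idea only): the memorizer nails the training set but fails on anything new, while the size-constrained model is forced to generalize.

```python
# Two toy "models" trying to learn y = 2 * x from a few examples.
train = {1: 2, 2: 4, 3: 6}

class Memorizer:
    # the over-large network's failure mode: perfect recall of the
    # training set, zero ability to handle unseen inputs
    def fit(self, data):
        self.table = dict(data)
    def predict(self, x):
        return self.table.get(x)  # None for anything it never saw

class LineFitter:
    # a constrained model: forced to compress the data into one slope
    def fit(self, data):
        self.slope = sum(y / x for x, y in data.items()) / len(data)
    def predict(self, x):
        return self.slope * x

mem, line = Memorizer(), LineFitter()
mem.fit(train)
line.fit(train)
print(mem.predict(3), mem.predict(10))    # perfect on seen, lost on unseen
print(line.predict(3), line.predict(10))  # generalizes to new inputs
```

Shrinking a network's capacity plays the same role as the `LineFitter` constraint: with no room to store every example, the model has to find the underlying rule instead.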
One example of an algorithm acting weird is in those "teaching a bot to play NES games" videos. There are cases where the bot will jump off a big ledge because doing so lets it move right for a while, which the program considers a "good" thing. It then dies, but apparently that "good" action kept getting selected for a long time until the program figured out a new way of doing things.
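The ledge scenario can be sketched as a toy reward loop (my own reconstruction of the dynamic, not the actual NES bot's code): moving right pays an immediate reward, and the fatal drop only shows up as a penalty after the fact.

```python
# Tiny one-dimensional "level": each step right pays +1,
# but position 4 is a ledge that ends the run.
LEDGE = 4

def run_episode(avoid_ledge):
    pos, total = 0, 0
    while pos < 6:
        if avoid_ledge and pos + 1 == LEDGE:
            break              # learned policy: stop before the drop
        pos += 1
        total += 1             # immediate reward for moving right
        if pos == LEDGE:
            return total - 10  # death penalty, discovered only afterwards
    return total

greedy = run_episode(avoid_ledge=False)   # chases the +1s and dies: 4 - 10 = -6
learned = run_episode(avoid_ledge=True)   # stops at position 3, keeps 3
```

The greedy run keeps looking "good" step by step right up until the penalty lands, which is why the bot repeats the fatal jump until enough episodes teach it otherwise.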
I think the future of programming lies in the same place the future of the motor did: a quick era of massive expansion, followed by the technology hitting a fairly impassable brick wall and our ideas about its possibilities becoming sensible. There are still people out there who believe it is possible to create a technological singularity, where a computer becomes infinitely smart by improving itself. This is like people pre-relativity thinking we could travel anywhere instantly if we kept making vehicles faster. Ultimately, we've had hyper-efficient, complex thinking machines around for years, easily produced and constantly finding ways to improve their capabilities; they just demand food, water, and payment, and aren't great at menial tasks. So we make something that is, and we are finding with things like neural networks and machine learning that as computers become as smart and capable as humans, they lose that "ultra efficient" trait and start making the same stupid mistakes, generalizations, and creative failures that humans do. We will reach a day when the computer is nothing more than a device we use for things. Today it is a cultural and technological icon of the present power of technology; tomorrow computers will be a fact of life like the telephone, the atom bomb, or the car.