I'm a PhD student in a small social science lab. We've had to stop collecting data on an experiment we are in the middle of, but the nature of the experiment means we can resume data collection once (if?) things get back to normal. The director of the lab sent out an email letting everyone know that he is not having smiles times. But what can you do? We just can't justify exposing research participants to a dangerous virus.

Fortunately, the PhDs in the lab have a lot of data from an experiment we ran last year to analyze. The PhD candidates are using this time to work on their dissertations. The other PhD student and I are focusing on coursework. Plus I'm trying to apply for scholarships to summer programs that may or may not happen. So that's fun.

The big worry for my lab is timing on grants. Right now we have a grant from the Army, and next month we need to send them a report showing our progress on the research they funded. It is generally bad form to send a report that says we are not making progress; doing that can hurt your chances of getting future grants. It is not clear how their research office will deal with the COVID-19 pandemic when reviewing reports, but here's hoping they are understanding, because my spot in this lab/PhD program is funded by those grants.
So we have live projections of celebrities, deep fakes, and digital manipulation of video that can be used to make it appear that a person is saying things they are not. For a long time now, celebrity personas have been more or less manufactured in boardrooms. I am imagining a world where the ownership of a personality is not clear. Maybe in 2080, an election campaign can rent a celebrity personality to endorse their candidate. Maybe the human at the center of the celebrity doesn't want to endorse that candidate. Too bad for them. The company will just fire up the projectors and point an AI at them.
"why are these people exempt from questioning?" How is Harris exempt from questioning? The Wired article is clearly critical of him and it's not like they are breaking ranks either. Lots of people have been critical of Harris and similar public intellectuals.
There is a guy on reddit doing that. He posts a summary of the Russia investigation once a week. Here is the most recent one.
Maybe this place: https://www.climatechangecommunication.org It's in Virginia, though. What you want to do is really niche. I doubt you can find success if you are not willing to move.
Driving home today I saw a semi blow out one of the tires on his trailer and just keep going. A cement truck got alongside him and the passenger was making a bunch of gestures, trying to get the semi driver to notice and pull over, but the dude just kept going.

Watching that scene, it occurred to me that automation may not put an end to the trucker. The guy in that cab was taking huge risks driving like that, and the direct beneficiary of those risks is the company that pays him to haul. That tire probably blew out because the trailer was under-maintained. I'll bet the guy kept going because he was close to his destination and wanted to make a deadline. Freight companies can put drivers into situations where they take big risks, and if those risks pay off, the company gets the reward. If they don't pay off, the company can just blame the driver.

They can't do that with automation. If they under-maintain an automated truck and it crashes, they are directly, and totally, responsible. If they program a truck to disregard the law, they are criminally responsible. No canned speech about how "we take safety very seriously" would explain away a reckless decision made by a company running an automated truck. Without a human behind the wheel, there is no reasonable doubt when it comes to culpability. Freight companies need truck drivers in the same way Wells Fargo needs tellers.
Here are some good things that could come out of this:

1: A national ID act which establishes an ID more secure than the SSN. Which is to say, an ID which has some type of security feature.

2: A law which prohibits binding arbitration as a means of avoiding civil action.

3: Data aggregation laws which specify how data aggregators must treat the data they collect, and which specify penalties and civil remedies for violations.
A halfway decent Markov bot with a good seed corpus could have generated Trump's side of the conversation.
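For anyone who hasn't played with one: a Markov bot just tabulates which words tend to follow which short word sequences in a seed corpus, then walks those statistics to spit out new text. Here's a minimal sketch in Python, assuming a hypothetical seed_corpus.txt of transcribed remarks; the function names are mine, not from any particular library.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=40):
    """Start at a random key and keep sampling a follower word at random."""
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        output.append(random.choice(followers))
        key = tuple(output[-len(key):])
    return " ".join(output)

if __name__ == "__main__":
    # "seed_corpus.txt" is a stand-in for whatever transcript you feed it.
    with open("seed_corpus.txt") as f:
        print(generate(build_chain(f.read())))
```

The point is that a model this dumb, given a repetitive enough corpus, produces output that is hard to tell apart from the real thing.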
I don't know if Wheaton's fatalism is warranted. After all, this whole Russia thing hasn't gone away. It's been a consistent feature of reporting on the Trump administration. It doesn't seem to be getting any better for them either.
Were it any other administration, I'd say a pardon in this situation would be political suicide. But trumpets online are already arguing about how this isn't a big deal, so who knows.
Trump can always pardon him too... or perhaps DTjr will do some minimal time and take the heat for the whole crew.
An incremental step to what? New technology brings with it the potential to cause serious social problems, and I think most of the serious potential problems associated with self-driving cars have been overlooked by most people interested in automation. Serious consideration should be given to the negative implications of this technology, and I don't think the potential problems I have raised have been seriously addressed. Most of the ethical discussions surrounding self-driving cars seem to be concerned with who should get squished, when it seems to me that "Who should have control over the fleets?" and "What sort of consumer-rights acquiescence is warranted?" are far bigger questions.

Also, I don't think cell phones are a good example of reasonably dealing with techno-political problems. Today, cell phones are part of the mass passive surveillance infrastructure which governments and their corporate partners have been diligently building for the last 20 years. Very little has been done to address that, and it has serious implications for stable liberal democracies.
Self-driving cars are probably going to cause serious problems for society which outweigh their benefits, and the problems they can create are largely related to civil rights and liberties. The apparent business model for these things is to move from sales to service, which means:

1). The public's ability to get around will be permanently subject to the capricious whims of a corporation.

2). People's daily activities and movements will be captured in large data sets over which they have no control.

One of the things that worries me is that we already see the government using corporate actors to circumvent the civil rights of citizens (e.g., asking telecoms to store data about their customers' calls so the government can rifle through it at will). Imagine a world where they have access to high-resolution data about citizens' movements. A suitably obsequious corporation could probably be convinced to deny people transportation, too. Want to go downtown to participate in a protest? Sorry, the cars aren't going downtown tonight. In fact, we've decided that you are an undesirable, so we won't be allowing you in our cars any more. You don't get to object, because this is a private company offering a service to you, and they can withdraw service at any time.

Self-driving cars, while heralded as a life-saving technology, are likely to create a situation where people are completely dependent on a system which extends them no rights.