It was so bad that companies could kick you off of insurance mid-treatment for something like cancer and then deny you for having a pre-existing condition.
Check out Habitat 67 in Montreal - an architectural student solved this in the 60s. Apartments where everybody gets their own rooftop terrace. Given the funding, the original plan was for a 30-story terraced hill of mixed-use and apartments in an A-frame with public green space underneath that mixed the density of apartments with the benefits of single family homes.
Since everybody thought he was crazy, he only got a fraction of the funding for what he ended up building for the 1967 World’s Fair, but those apartments have the longest occupancy time of any building in Canada (some seeing 2 or 3 generations living in them) and a 5-year waiting list on units.
Last year, a 3D model of the original concept was released for Unreal Engine: www.unrealengine.com/en-US/hillside
Also, there’s the cost and community aspect of games. For the price of a movie ticket and popcorn, I can buy a game that I can play with friends for easily dozens of hours instead of us silently sitting next to each other for an hour or two.
With the increasing death of third places and the increasing cost of existing outside, video games have become their own sort of third place for people to get together and just hang out.
On the one hand, yes, and Fandom is a blight on the internet.
On the other hand, AI models like ChatGPT are wrong some 53% of the time. The fact that this is another “use nontoxic glue to keep your cheese from falling off of pizza” situation doesn’t mean that Google isn’t equally culpable for doing nothing to prevent these sorts of occurrences even when the sources are right (AI is as likely to make things up that aren’t even in its cited sources as it is to actually give you info from them).
There’s a train bridge like that in my hometown, but it’s directly over the base of a fairly steep hill. Pretty much anything bigger than a work van is likely to hit it, and I’ve seen a couple of box trucks with the top 6 inches or so of their roof peeled back like a half-open can of sardines.
Freedom of religion, but not freedom from religion! Checkmate, atheists!
/s
Like a record, baby.
If they haven’t been swayed already, this won’t do a damn thing.
There are definitely studies on it. I don’t know how they measure them, but it’s all about the number and types of cones in your eyes, because there are a few different types that see different colors. This is why tigers are orange: their prey lack the cones that see red, so the tigers look like the rest of the background foliage.
I’ve heard that women have more cones in their eyes as well, which leads to a more nuanced sense of differentiation between colors.
Found the Republican
For perspective, one of the states in the southwest (I think New Mexico) tried to pass a similar ban, and it got overruled by a judge because it was found that it would affect a total of 4 girls in the entire state. The judge felt that that violated the federal law that says you can’t make a law targeting specific people (i.e., you can’t make it illegal for Mike and Jerry specifically to join the basketball team).
Another Millennial here, so take that how you will, but I agree. I think that Gen Z is very tech literate, but only in specific areas that may not translate to other areas of competency that are what we think of when we say “tech savvy” - especially when you start talking about job skills.
I think Boomers especially see anybody who can work a smartphone as some sort of computer wizard, while the truth is that Gen Z grew up with it and were immersed in the tech, so of course they’re good with it. What they didn’t grow up with was having to type on a physical keyboard and monkey around with the finer points of how a computer works just to get it to do the thing, so of course they’re not as skilled at it.
Because we’re talking pattern recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and output data based on the statistical averages from their training sets - collaging pieces of their training into what they think is the right answer. And I use the word think here loosely, as this is the exact same process that the Gaussian blur tool in Photoshop uses.
This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory is trying to sell pieces of works that they’ve consumed as their own - or even somebody copy-pasting bits from CliffsNotes - then they should get in trouble; the same goes for these companies.
Given A and B, we can understand C. But an LLM will only be able to give you AB, A(b), and B(a). And they’ve even been shown to spit out A and B wholesale, proving that they retain their training data and will regurgitate the entirety of copyrighted material.
Reminds me of when I read about a programmer getting turned down for a job because they didn’t have 5 years of experience with a language that they themselves had created 1 to 2 years prior.
The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.
And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes that they make every time. Take a look at any generated image, and you won’t be able to identify where a light source is, because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue and that some places have collections of lighter pixels.

I recently heard about an AI that scientists had trained to identify pictures of wolves, and it was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether or not a picture was of a wolf.
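That wolf-vs-snow failure mode is easy to reproduce in a toy sketch. This is a minimal illustration with made-up numbers (not the actual study’s data or model): a “classifier” that only learns a background-brightness threshold scores perfectly on its skewed training set, then mislabels anything photographed against snow.

```python
# Toy spurious-correlation demo. Each sample is
# (background_brightness 0-255, is_wolf). The numbers are invented:
# every training wolf happens to be on snow, every husky indoors.
training = [
    (230, True), (240, True), (225, True),   # wolves, all on snowy backgrounds
    (60, False), (80, False), (70, False),   # huskies, all on dark backgrounds
]

# "Training": split the two classes at the midpoint of their mean brightness.
wolf_mean = sum(b for b, w in training if w) / sum(1 for _, w in training if w)
dog_mean = sum(b for b, w in training if not w) / sum(1 for _, w in training if not w)
threshold = (wolf_mean + dog_mean) / 2

def predict_is_wolf(background_brightness):
    # The model never looks at the animal, only the background.
    return background_brightness > threshold

# Perfect accuracy on the training set...
print(all(predict_is_wolf(b) == w for b, w in training))  # True

# ...but a husky photographed in snow gets labeled a wolf,
# and a wolf on dark grass gets labeled a dog.
print(predict_is_wolf(235))  # True  (husky in snow -> "wolf")
print(predict_is_wolf(65))   # False (wolf on grass -> "dog")
```

The point isn’t that real classifiers use a single threshold; it’s that when a shortcut feature perfectly separates the training data, the model has no reason to learn anything else.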
The context that we’re talking about here isn’t somebody that you know personally and have permission from/are talking to mutual friends of. We’re talking about publicly announcing a stranger’s dead name to everybody who reads this post and the justification that it’s okay because they once had 15 minutes of internet fame from a video going viral before they transitioned. At best, it’s a paparazzi-esque invasion of privacy, and at worst, it’s straight up doxxing.
A. Hyperbole, look it up.
B. Why do you think it’s okay to dox people?
TIL the internet doesn’t understand hyperbole so long as it’s a trans person using it.
Third time’s the charm?