  • h34d@feddit.de to Programmer Humor@programming.dev · “5/5 stars” · 9 months ago

    Dev Home is a new control center for Windows providing the ability to monitor projects in your dashboard using customizable widgets, set up your dev environment by downloading apps, packages, or repositories, connect to your developer accounts and tools (such as GitHub), and create a Dev Drive for storage all in one place.

    • Use the centralized dashboard with customizable widgets to monitor workflows, track your dev projects, coding tasks, GitHub issues, pull requests, available SSH connections, and system CPU, GPU, Memory, and Network performance.
    • Use the Machine configuration tool to set up your development environment on a new device or onboard a new dev project.
    • Use Dev Home extensions to set up widgets that display developer-specific information. Create and share your own custom-built extensions.
    • Create a Dev Drive to store your project files and Git repositories.

    https://learn.microsoft.com/en-gb/windows/dev-home/


  • Both the popular article linked in the op as well as the actual paper seem to use the terms “liberal/conservative” and “leftist/rightist” interchangeably. Quote from the paper:

    It is necessary to note that, first, similar to previous studies on this topic that consider the left–right dimension equivalent to the liberal–conservative dimension (Fuchs and Klingemann, 1990; Hasson et al., 2018), throughout this paper, the terms leftist and liberal (and similarly, rightist and conservative) were used interchangeably. The liberal–conservative dimension is often used in the United States, whereas the left–right dimension is commonly used in Europe and Israel (Hasson et al., 2018).

    There were “only” 55 participants, but I assume that if some of them identified as socialist, they would already be included under “leftist/liberal” for the purpose of the study.


  • Thanks for giving additional explanation. I was trying to keep my reply relatively short and agree with most of what you said.

    Although the article is behind a paywall (which is somewhat strange in cosmology, but I digress), you can check other articles by the same author that also use the “varying constants” framework, for example https://arxiv.org/abs/2201.11667. His framework is that the speed of light c, the Planck constant h, the Boltzmann constant k, and the gravitational constant G depend directly on time, or more precisely, on the expansion factor of the universe.

    Thanks for the arxiv link. I was aware that some people did stuff like this (time-varying fundamental constants), but since the abstract speaks only of “coupling constants”, I was thinking of Λ (and G), not fundamental constants. There are some theories that motivate a varying speed of light (Hořava–Lifshitz gravity comes to mind, for example), but this doesn’t seem to be motivated by any theory in particular, as far as I can tell. I also agree with you that it seems quite weird to give c, h, and k a time dependence each, only to then have them all be functions of G.

    Since this is a time-dependent change, there is no real way to significantly test the hypothesis (unlike the energy-dependent changes).

    I’m not sure if I fully agree with this. Shouldn’t varying c, h, and k with time clearly change any observable related to the dispersion of light and gravitational waves, or black body radiation (among many other things)? And if we had access to even just one of those from different times during cosmological evolution (where the change should be much larger than over a few decades in the present), we should in principle be able to check quite easily whether the proposed scaling law holds.

    Of course, the author could always make the variation with time small enough to avoid contradicting experiment (which would make it indeed unfalsifiable in practice), but that seems to go against the main idea of using these time-varying fundamental constants to explain some aspects of cosmological evolution. My guess now would be that the paywalled paper modifies the relation between redshift and time to undo the “damage” done by modifying the constants. Nevertheless, it wouldn’t surprise me much if this kind of scaling is already ruled out implicitly by some data, as I can’t imagine it not affecting a lot of different observables, but maybe I’m also overestimating the experimental cosmological data available at present, or the size of the variation the author proposes.
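    To make the “observable consequences” point concrete: Wien’s displacement law ties the black-body peak wavelength to the combination hc/k, so any rescaling of those constants shifts the peak by a definite factor. A minimal sketch (the 1% rescaling below is a purely illustrative assumption, not the paper’s actual model):

```python
# Wien's displacement law: lambda_max = h * c / (x_w * k * T),
# where x_w ~ 4.965 solves the transcendental peak condition.
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K
X_W = 4.965114       # Wien constant (dimensionless)

def wien_peak(T, f_h=1.0, f_c=1.0, f_k=1.0):
    """Black-body peak wavelength (m) at temperature T (K),
    with hypothetical rescaling factors applied to h, c, k."""
    return (f_h * H) * (f_c * C) / (X_W * (f_k * K) * T)

# Sun-like temperature (T ~ 5778 K): peak near 500 nm.
print(wien_peak(5778))  # ~ 5.0e-7 m

# If h, c, and k were all 1% larger at some earlier epoch (an
# arbitrary illustrative assumption), the peak would shift by
# the factor f_h * f_c / f_k ~ 1.01 -- an observable change.
print(wien_peak(5778, 1.01, 1.01, 1.01) / wien_peak(5778))  # ~ 1.01
```

    The point is just that such combinations of constants enter observables directly, so a concrete scaling law predicts definite shifts that spectra from different cosmological epochs could in principle test.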


  • According to my understanding, yes. For example, it is usually assumed that there was a period shortly after inflation when matter was in a quark–gluon plasma, which would imply a smaller strong coupling than at today’s low energies, since a large strong coupling is associated with confinement. There was also the electroweak epoch, during which the electromagnetic and weak interactions were unified and the corresponding gauge bosons were massless. The masses of the W and Z bosons can thus also be regarded as time-varying, as can the electron charge. However, it should be noted that these changes are not all that significant on the cosmological scales under investigation here (e.g. the quark epoch ended at about 10⁻⁶ seconds after the big bang, which is much, much less than the age of the universe, and it is assumed that it still took quite a while before the first stars formed). A time-varying cosmological constant could potentially be much more relevant (and some quantum gravity theories even predict it), and I’ve heard some people suggest it as a potential solution to the H0 tension. However, I unfortunately can’t access the paper to assess what precisely the author did there, and whether it is in any way similar to what I just mentioned.


  • He says he has a new way of describing light where it loses energy over time (something weird) and so it explains redshift.

    From what I understand, the main idea behind tired light isn’t particularly weird, it’s just that scattering could potentially lead to a redshift as well. The issue is that if you assume enough scattering to explain cosmological redshift you would also get some other effects, which are however not observed. This basically ruled out the original tired light theory by Zwicky from the beginning. The author of this paper seems to try to get around that by combining a smaller amount of “tired light” with time-varying couplings. Unfortunately the paper is behind a paywall and I can’t tell any more details.

    He also says universal constants can change (something never observed before that would fundamentally change physics)

    No, he says that coupling constants (not sure if that is what you mean by “universal constants”) can change, which is a generic consequence of the renormalization group (RG) and has in fact been observed in nature (the running of the electron charge or of the strong coupling, to name just the most famous examples). From a QFT perspective, the cosmological constant is also a coupling, and several quantum gravity theories do in fact generically predict or suggest a time-varying cosmological constant. So this part by itself isn’t really that out there, nor that original for that matter. However, since I can’t access the paper, I can’t judge whether the author’s way of varying Λ is reasonable or just a way to fit the data without any physical motivation, and I don’t really know what the article means by “he proposes a constant that accounts for the evolution of the coupling constants”.
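    To make the “couplings run, and this has been observed” point concrete, here is the textbook one-loop QED result for the running fine-structure constant, keeping only the electron loop (a simplification: the full Standard Model running gives roughly 1/129 at the Z mass, in agreement with measurement):

```python
import math

ALPHA_0 = 1 / 137.035999  # fine-structure constant at low energy
M_E = 0.000511            # electron mass, GeV

def alpha_qed(Q):
    """One-loop QED running coupling at scale Q (GeV), electron loop only:
    alpha(Q) = alpha0 / (1 - (2*alpha0 / (3*pi)) * ln(Q / m_e))."""
    return ALPHA_0 / (1 - (2 * ALPHA_0 / (3 * math.pi)) * math.log(Q / M_E))

# At the Z boson mass (~91.19 GeV) the electron loop alone already
# shifts 1/alpha from ~137 down to ~134.5; including all charged
# Standard Model particles brings it close to the measured value.
print(1 / alpha_qed(91.19))
```

    Note that this is running with energy scale, not with time; the question in this thread is whether the author’s time variation is of a comparably well-motivated kind.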

    and he can explain dark matter

    That seems like a more grandiose claim to me, if accurate. Do you have a source for where the author claims that? Although he wouldn’t be the first to do so.

    I’m pretty sure this guy isn’t toppling physics today as the bar is set high for whatever evidence he is sharing.

    I think this can be said for a lot of popular science articles with topics like this. However, in many cases the blame can lie more with the pop-sci journalists who are looking for a cool story and might over-interpret the author’s claims (I guess “physics toppled!!!11” sounds more interesting than “some guy suggests that some data might be fitted in a slightly different way”). Although in this case at least the age of the universe claim does seem to come from the author.

    Edit: Judging by another article of the author someone else linked me to further down, it seems that while the author does speak of coupling constants, he really does refer to time-varying fundamental constants. So I must agree with the previous poster on this, it does seem quite a bit more out there than I had originally assumed.



  • Isn’t “have” either an auxiliary verb or verb and “of” a preposition?

    Yes.

    Are these acceptable? If yes, why? If not, why not?

    No, because you constructed them by merely replacing the verb “have” with the preposition “of” in situations that have nothing to do with “of” after “should”/“would”/“could”. I’m not sure what point you’re trying to make, since neither I nor the people I cited ever claimed that this should work in the first place. The claim, made in particular by the author of the first paper I cited, is that for some speakers there seems to be a novel construction modal verb + “of” + past participle, not that the preposition “of” has the same function as “have” in this case or in any other (in this case, the novel construction as a whole would have more or less, but not entirely, the same function as modal verb + “have” + past participle, but “of” would still be just a preposition).

    I don’t know man, Oxford Dictionary (click Grammar Point to expand) says that […] it is definitely considered wrong in standard English.

    Yes, it certainly is considered wrong in standard English, but the interesting thing is that in some non-standard dialects there might genuinely be a novel grammatical construction that actually uses the preposition “of”. I mean, you don’t need to find that interesting, but I do. And if that is indeed the case, it would mean that the speakers of those dialects are not making a purely orthographic mistake like when people confuse “they’re” and “their”, for example, but are rather speaking or typing in their dialect.


  • the reason “in some dialects of English native speakers really do say ‘should of’ etc” is phonetics.

    What the author of the first link claims (and the second link explains in a more accessible way) is that phonetics isn’t the whole story for everyone. Some native speakers really do say “of” sometimes, even when it’s stressed and doesn’t sound like “'ve” at all. So for them it wouldn’t just be a spelling mistake, but a different grammatical construction.

    last I checked it was never added to the dictionary

    Some dictionaries (e.g. Merriam-Webster) actually do list “of” as an alternate spelling of “have” (though not in the sense of a genuinely different grammatical construction).
    Obviously it’s not considered standard by anyone, which is also why teachers, who (should) teach the standard dialects, call it incorrect.

    Language of course is living and ever changing, but the line must be drawn somewhere lest we devolve into shouting and grunts like neanderthals

    Language changes whether you and I like it or not, and efforts to stop that from happening are generally unsuccessful. You can also rest assured that a simple change in what is considered correct grammar or spelling (which, as far as I know, nobody has been suggesting in this case so far, but it seems like that would be the “worst-case” scenario from your perspective) would not lead to us or language “devolving”. Also, while we don’t know anything precise about how Neanderthals spoke, most likely they sounded more or less like us and did not communicate by “shouting and grunts”.


  • That’s not how linguistics works, though. If people (native speakers) speak like that, it’s “correct” or normal for their dialect. This doesn’t mean it’s “correct” in whatever is considered the “standard” dialect of the language (for English, there isn’t one single standard, but de facto there are standard dialects in the English-speaking countries which are taught in school and typically used in the news, newspapers, etc.). But from a linguistic perspective, both “I have seen it.” and “I seen it.” are equally “correct” (linguists typically don’t use that term in this context, rather something like “grammatical”); they just represent different dialects of English.