• Killing_Spark@feddit.de
        1 year ago

        Only up to 2^53. The number of integers representable by a long is larger. But a double can and does represent every int value exactly.
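
        For example (a quick sketch using only the standard JDK, e.g. pasted into jshell):

          int i = Integer.MAX_VALUE;                  // 2^31 - 1, the largest int
          System.out.println((int) (double) i == i);  // true: every int survives the round trip

          double limit = Math.pow(2, 53);             // 9007199254740992.0
          System.out.println(limit == limit + 1);     // true: 2^53 + 1 is not representable
          System.out.println(limit - 1 == limit - 2); // false: below 2^53 neighbouring integers stay distinct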

        • parlaptie@feddit.de
          1 year ago

          *long long, if we’re gonna be talking about C types. A long is commonly limited to 32 bits.

          • Aux@lemmy.world
            1 year ago

            C is irrelevant because this post is about Java and in Java long is 64 bits.

          • voxel@sopuli.xyz
            1 year ago

            You should never be using these types in C anyway; (u?)int(8/16/32/64)_t are way more sane.

    • fsxylo@sh.itjust.works
      1 year ago

      Also, if you are dealing with a double, then you’re probably dealing with several of them, or doing math that may produce another double. So returning a double just saves some effort.

    • pomodoro_longbreak@sh.itjust.works
      1 year ago

      Yeah it makes sense to me. You can always cast it if you want an int that bad. Hell just wrap the whole function with your own if it means that much to you

      (Not you, but like a hypothetical person)
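
      Casting is a one-liner anyway (sketch, standard JDK):

        double d = Math.ceil(41.3);         // 42.0, what the API gives you
        int    i = (int)  Math.ceil(41.3);  // 42, fine as long as the result fits in an int
        long   l = (long) Math.ceil(41.3);  // 42, safer for larger values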

      • RoyaltyInTraining@lemmy.world
        1 year ago

        A double can represent numbers up to ± 1.79769313486231570x10^308, or roughly 18 with 307 zeroes behind it. You can’t fit that into a long, or even 128 bits. Even though rounding huge doubles is pointless, since only the first dozen digits or so are saved, using any kind of Integer would lead to inconsistencies, and thus potentially bugs.
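
        You can see the gap directly (sketch):

          System.out.println(Double.MAX_VALUE); // 1.7976931348623157E308
          System.out.println(Long.MAX_VALUE);   // 9223372036854775807, i.e. roughly 9.2E18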

    • karlthemailman@sh.itjust.works
      1 year ago

      How does that work? Is it just because double uses more bits? I’d imagine for the same number of bits, you can store more ints than doubles (assuming you want the ints to be exact values).

        • karlthemailman@sh.itjust.works
          1 year ago

          No, I get that. I’m sure the programming language design people know what they are doing. I just can’t grasp how a double (which has to use at least 1 bit to represent whether or not there is a fractional component) can possibly store more exact integer values than an integer type of the same length (same number of bits).

          It just seems to violate some law of information theory to my novice mind.

          • whats_a_refoogee@sh.itjust.works
            1 year ago

            It doesn’t. A double is a 64 bit value while an integer is 32 bit. A long is a 64 bit signed integer which stores more exact integer numbers than a double.

            • LeFantome@programming.dev
              1 year ago

              Technically, a double stores all integers exactly up to a certain value (2^53) and then only approximations of much larger integers. A long stores all its integers exactly but cannot handle values nearly as large.

              For most real world data ranges, they are both going to store integers exactly.

            • karlthemailman@sh.itjust.works
              1 year ago

              I don’t think that’s possible. Representing more exact ints means representing larger ints and vice versa. I’m ignoring signed vs. unsigned here as in theory both the double and int/long can be signed or unsigned.

              Edit: ok, I take this back. I guess you can represent larger values as long as you are ok that they will be estimates. I.e., double of N (for some very large N) will equal double of N + 1.
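
              Quick check of that (sketch):

                long n = 1_000_000_000_000_000_000L;                // 1e18, well above 2^53
                System.out.println((double) n == (double) (n + 1)); // true: both round to the same double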

            • karlthemailman@sh.itjust.works
              1 year ago

              I agree with all that. But I’m talking about exact integer values as mentioned in the parent.

              I just think this has to be true: count(exact integers that can be represented by an N-bit floating point variable) < count(exact integers that can be represented by an N-bit int type variable)

          • Akagigahara@lemmy.world
            1 year ago

            I would need to look into the exact difference of double vs integer to know, but a partially educated guess is that they are referring to Int32 vs double and not Int64, aka long. I did a small search and saw that a double spends some of its 64 bits on an exponent rather than using all of them for exact digits.

            • karlthemailman@sh.itjust.works
              1 year ago

              Yeah, that was my guess too. But that just means they could return a long (or whatever the 64-bit int equivalent in Java is) instead of an int.

              • Akagigahara@lemmy.world
                1 year ago

                Okay, so I dug in a bit deeper. Doubles are standardized as a 64-bit bundle that is divided into 1 sign bit, 11 exponent bits and 52 fraction (mantissa) bits. It’s quite interesting. As to how it works in depth, I’ll probably try to analyze a bit conversion when I get the chance.
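
                Something like this pulls the three fields apart (sketch using Double.doubleToLongBits; -6.25 is just an arbitrary example value):

                  long bits     = Double.doubleToLongBits(-6.25);
                  long sign     = (bits >>> 63) & 0x1;        // 1 sign bit
                  long exponent = (bits >>> 52) & 0x7FF;      // 11 exponent bits (biased by 1023)
                  long fraction = bits & 0xFFFFFFFFFFFFFL;    // 52 fraction (mantissa) bits
                  System.out.printf("sign=%d exponent=%d fraction=0x%X%n", sign, exponent, fraction);
                  // prints: sign=1 exponent=1025 fraction=0x9000000000000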

          • towerful@programming.dev
            1 year ago

            I’m going to guess here (cause I feel this community is for learning)…
            Integers have exactness. Doubles have range.
            So if MAX_INT + 1 is possible, then ~(MAX_INT + 1) is probably preferable to an overflow or silent MIN_INT.

            But Math.ceil probably expects a float, because it is dealing with decimals (or similar). If it were an int, rounding wouldn’t be required.
            So if Math.ceil returned an integer, it could be passed a float larger than INT_MAX, which would overflow an int (so: error, or silent wraparound). Or it can just return a float.

          • nile@sopuli.xyz
            1 year ago

            Oh now I get what you mean, and like others mentioned, yeah it’s more bits :)

  • jayrhacker@kbin.social
    1 year ago

    It’s the same in the standard C library, so Java is being consistent with a real programming language…

      • Glome@feddit.nl
        1 year ago

        Java has many abstractions that can be beneficial in certain circumstances. However, it forces a design principle that may not work best in every situation.

        I.e., inheritance can be both unnatural for the programmer to think in and unrepresentative of how data is stored and manipulated on a computer.

        • whats_a_refoogee@sh.itjust.works
          1 year ago

          You don’t have to use inheritance with Java. In fact, in most cases it’s better that you don’t. Practically all of the Java standard library doesn’t require the use of inheritance, same with most modern libraries.

          On the contrary, I think inheritance is a very natural way to think. However, that doesn’t translate into readable and easy to maintain code in the vast majority of the cases.

          I am not sure what you mean by how it’s stored or manipulated on a computer. A garbage collected language like Java manages the memory for you. It doesn’t really care if your code is using inheritance or not. And unless you’re trying to squeeze the last drops of performance out of your code, the memory layout shouldn’t be on your mind.

            • Von_Broheim@programming.dev
              1 year ago

              People hating on Java because “inheritance” usually don’t know the difference between inheritance and polymorphism. Stuff like composition and dependency inversion is black magic to them.

        • lightsecond@lemmy.world
          1 year ago

          We’re gate-keeping the most mainstream programming language now? Next you’ll say English isn’t a real language because it doesn’t have a native verb tense to express hearsay.

        • kaba0@programming.dev
          1 year ago

          And it is not forced at all. No one holds a gun to your head to write extends. “Favor composition over inheritance” has been repeated as a mantra for at least a decade.

        • LeFantome@programming.dev
          1 year ago

          I do not like Java but this is a strange argument. The people that invented Java felt that most of the C language should be wrapped in unsafe.

          Opinions can vary, but saying Java is not a real language is evidence-free name calling. One could just as easily say that any language that does not allow you to differentiate between safe and unsafe behaviour is incomplete and not a “real” language. It is not just the Java and C# people who may say this. As a C fan, I am sure you have heard Rust people scoff at C as a legacy language that was fine for its day but clearly outclassed now that the “real” languages have arrived. Are you any more correct than they are?

        • kaba0@programming.dev
          1 year ago

          Memory is an implementation detail. You are interested in solving problems, not pushing bytes around, unless that is the problem itself. In 99% of cases, though, you don’t need guns and knives; it’s not a U.S. school (sorry).

  • korstmos@kbin.social
    1 year ago

    Doubles have a much higher max value than ints, so if the method were to convert all doubles to ints it would not work for double values above 2^31-1.

    (It would work, but any value over 2^31-1 passed to such a function would get clamped to 2^31-1)
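
    That clamping is exactly what a narrowing cast does in Java (quick sketch):

      double big = 1e12;               // 1,000,000,000,000
      System.out.println((int) big);   // 2147483647: clamped to Integer.MAX_VALUE
      System.out.println((long) big);  // 1000000000000: still fits in a long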

      • whats_a_refoogee@sh.itjust.works
        1 year ago

        To avoid a type conversion that might not be expected. Integer math in Java differs from floating point math.

        Math.floor(10.6) / Math.floor(4.6) = 2.5 (double)

        If floor returned a long, then

        Math.floor(10.6) / Math.floor(4.6) = 2 (long)

        If your entire code section is working with doubles, you might not like finding Math.floor() unexpectedly creating a condition for integer division and messing up your calculation. (Have fun debugging this if you’re not actively aware of this behavior).
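
        A runnable version of that pitfall (sketch):

          double a = Math.floor(10.6);             // 10.0
          double b = Math.floor(4.6);              // 4.0
          System.out.println(a / b);               // 2.5, double division
          System.out.println((long) a / (long) b); // 2, what integer division would give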

      • korstmos@kbin.social
        1 year ago

        Because even a long (64-bit int) is too small :)
        A long can hold at most 2^63-1 ≈ 9.2E18 (it’s signed)
        A double can hold up to about 1.79E308

        Double does some black magic with an exponent, and can hold absolutely massive numbers!

        Double also has some situations that it defines as “infinity”, a concept that does not exist in long as far as I know (?)
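
        For example (sketch):

          System.out.println(1.0 / 0.0);                           // Infinity: doubles do not throw here
          System.out.println(Math.ceil(Double.POSITIVE_INFINITY)); // Infinity
          // whereas 1L / 0L throws an ArithmeticException instead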

    • parlaptie@feddit.de
      1 year ago

      But there’s really no point in flooring a double outside of the range where integers can be represented accurately, is there?

  • Marek Knápek@programming.dev
    1 year ago

    Makes sense. How would you represent floor(1e42) or ceil(1e120) as an integer? It would not fit into a 32-bit (unsigned) or 31-bit (signed) integer, not even into a 64-bit integer.
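
    For example (sketch):

      double x = Math.floor(1e42);
      System.out.println(x);         // 1.0E42, still comfortably a double
      System.out.println((long) x);  // 9223372036854775807: the cast saturates at Long.MAX_VALUE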

      • 1rre@discuss.tchncs.de
        1 year ago

        I feel this is worse than double though, because it’s a library type rather than a basic type, but I guess ceil and floor are also library functions, unlike toInt.

  • maniacal_gaff@lemmy.world
    1 year ago

    It would be kinda dumb to force everyone to keep casting back to a double, no? If the output were positive, should it have returned an unsigned integer as well?

    • baseless_discourse@mander.xyz
      1 year ago

      I think one of the main reasons to use floor/ceiling is to predictably cast a double into an int. This type signature kind of defeats that important purpose.

      I don’t know the historical context of Java, but possibly at that time people saw types more as a burden than as a way to guarantee correctness? (Which is kind of still the case for many programmers, unfortunately.)

  • Rev@ihax0r.com
    1 year ago

    Python 2 was like this also, though Python 3’s math.floor now returns an int. I can’t think of many other languages that return ints.

  • Aelorius@jlai.lu
    1 year ago

    Logic, in math, if you have a real and you round it, it’s always a real not an integer. If we follow your mind with abs(-1) of an integer it should return a unsigned and that makes no sense.

    • Kogasa@programming.dev
      1 year ago

      in math, if you have a real and you round it, it’s always a real not an integer.

      No, that’s made up. Outside of very specific niche contexts the concept of a number having a single well-defined type isn’t relevant in math like it is in programming. The number 1 is almost always considered both an integer and a real number.

      If we follow your mind with abs(-1) of an integer it should return a unsigned and that makes no sense.

      How does that not make sense? abs of an integer is always a nonnegative integer value, so why couldn’t it be an unsigned int?

      • Aelorius@jlai.lu
        1 year ago

        I’m ok with that, but what I mean is that it makes no sense to change the type of the provided variable if in mathematics the type can be the same.

  • MrGeekman@lemmy.world
    1 year ago

    Try Math.round. It’s been like ten years since I used Java, but I’m pretty sure it’s in there.
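
    If memory serves, the signatures look something like this (sketch, worth double-checking against the Javadoc):

      long   a = Math.round(2.6);  // 3: Math.round(double) returns a long
      int    b = Math.round(2.6f); // 3: Math.round(float) returns an int
      double c = Math.ceil(2.6);   // 3.0: ceil/floor stay double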

  • Bappity@lemmy.world
    1 year ago

    The programming language Java being named after coffee is perfect because, like coffee, it tastes like shit but gets the job done.

    • MrGeekman@lemmy.world
      1 year ago

      I think you need to try some lighter-roasted, higher-quality beans which were roasted fairly recently and only grind them a day or so before you use them. There are also different brewing methods and coffee/water ratios that you can try.

      • mestari@lemmy.world
        1 year ago

        I love and consume lots of coffee but I sincerely believe it only tastes good because I associate the taste with the boost it gives. Exactly like cigarettes taste tolerable, good even, when you smoke them regularly.

        • MrGeekman@lemmy.world
          1 year ago

          If you don’t know what you’re drinking, it’s probably dark roast. Dark roast is like charcoal compared to light roast.

          The coffee most folks use (e.g. Folgers, Maxwell House) is low-quality coffee made in haste to keep the price low enough for folks to be willing to buy it. They only offer darker roasts because the dark roast disguises the inferior nature of the beans, or rather the inferior process. The unfortunate truth is that good coffee costs more because it takes longer to process, and most folks don’t want to spend that much on coffee. So, you get what you pay for.