
## Comments

16 comments

The problem appears to be that your spiral intersects with itself which causes a non-manifold solid to be created.

Try increasing the pitch a little so that it does not intersect.

E.g. in this image, the triangle is 4mm high and the pitch is also 4mm, which means I can successfully drag it until the first rotation meets the start position, at which point it aborts.

However, if I set the pitch slightly bigger, 4.001, I can drag as many turns as I want:

You could also make the triangle very slightly smaller (say 3.999mm) and leave the pitch at 4mm, and it should also work.
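The geometric condition behind this is simple enough to sketch. This is a hypothetical check, not DSM's actual test -- just the rule of thumb that each turn of the swept profile must clear the previous one:

```python
# Hypothetical helper (not part of DSM): a swept helical profile avoids
# self-intersection when the pitch exceeds the profile's height, so each
# turn clears the one before it.
def turns_clear(profile_height_mm: float, pitch_mm: float) -> bool:
    """True if successive turns do not touch or overlap."""
    return pitch_mm > profile_height_mm

# 4mm-high profile with a 4mm pitch: the turns touch exactly -> not clear
print(turns_clear(4.0, 4.0))    # False
# Nudging the pitch to 4.001mm leaves a 0.001mm gap -> clear
print(turns_clear(4.0, 4.001))  # True
```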

Thanks so much Buk for your help. I will try this in the future

Self-intersection / overlapping is acceptable within system limits. So a 4mm triangle height pulls at 3.999 pitch (0.001 overlap) as shown without system error (indeed, a 3.9999 pitch (overlap 0.0001) is fine too), but at 3.99991 pitch the overlap is too small, causing auto merging/stitching and a program freeze.

Also (going the other way), the pitch has to be greater than or equal to 4.0001 for a successful pull without auto-stitching etc.

This 0.0001mm isn't the system accuracy level; that's much smaller, but that's another discussion altogether...

Tim said: "...but at 3.99991 pitch, the overlap is too small causing an auto merging/ stitching and a program freeze."

Unfortunately, that limit you've discovered by empirical methods -- and all other similarly discovered capricious and arbitrary limits -- is only applicable to your exact test scenario, on your version of DSM. You don't define your radius of rotation, so I cannot quote the exact limit for that exact scenario under V2 for comparison, but I know from my own previous experiments that if you change any detail, the limits change too.

And that highlights the root cause of almost every frustration with DSM: its attempts to guess what you want and "correct" it! If it simply drew exactly what you asked it to draw, and didn't try to guess whether you might want things auto-stitched or auto-merged or auto-fing-anything, then about 99% of the problems we encounter using it would simply go away.

With IEEE-754 64-bit floating-point math, with its 15.955 decimal digits of precision, there should be no situation -- even those incorporating the lower echelons of COS() compounded with the higher echelons of SIN() -- where it is necessary to use deltas of less than 8 decimal digits of precision to determine whether things overlap.

Buk.

My reply was all about what the problem was and why it occurred... For general-purpose things, feature sizes < 0.001mm aren't necessary, and aren't easily made or measured -- usually requiring an order of magnitude finer resolution to assure accuracy.

I agree with the problem of construction, and very small scales can be a big problem... If the requirement is for high accuracy at sub-micron and approaching nm range, perhaps a dedicated 2D program might be more suitable.

I've tried another drafting program (Solid Edge 2020) and I can't do very small geometry with that either. I assume they all work in the same manner and ranges unless you pay for a special offering -- which I'm sure they can do.

I can move solids apart by 0.00000001mm (8 dp) and get measurement results to 15 decimal digits.

Absolutely, it doesn't like to model very small solids, as I said earlier. Yes, it does 'grab' onto geometry key points -- it's meant to be an aid, and is, until you ask of it what it wasn't created for, I assume...

Buk, although dragging faces or entering 'reducing pulls' can reduce solids or 2D geometry as shown...

It is just impossible to work at such a small scale. As you can see, my graphics card is having trouble producing rendered faces -- probably that DSM maths.

It seems reasonable that we could set the system mm decimal point where needed if designing very small parts -- I just assume the developers considered parts with features of size around 0.1um (4 dp) to be the minimum required, with the math/results going to 8 dp (user set). Moves of 0.00000001mm are possible, though.

Also, with small objects like the 0.0005mm-sided cube below, is it not necessary to have the maths working down to 3 orders of magnitude lower to get accurate results?

I'm no expert on what's possible though...

You misunderstand me Tim.

This triangle:

Is stored in a file called b1xf1xe3.sa(b|t) -- meaning it contains 1 body, with 1 face and 3 edges -- and is represented internally like this:

The first two lines are version information.

The first number (1000) on the third line is the dimension scaling factor. All values (regardless of what units you've chosen to view the document in) are stored in metres; and that number means I've chosen to work/view in mm, so all values are multiplied by 1000 before being displayed.

The fourth line is a checksum.

The interesting stuff starts on the fifth, "body", line. This describes the body in terms of pointers to subsequent entities in the file. Pointers start with a \$ followed by a number; \$-1 is a null pointer -- an unused pointer to nothing.

That line "body \$-1 -1 -1 \$-1 \$1 \$-1 \$-1 T -0.023 0 0 -0.019999999999999993 0.0040000000000000001 0 #"

says that the 'body' consists of one pointer (\$1), which refers to the next 'lump' line. And the six numbers after the 'T' define its 3D bounding box in terms of two opposite corners: the bottom-left-front corner (X:-23mm, Y:0mm, Z:0mm) and the far corner (X:-19.999999999999993mm, Y:4.0000000000000001mm, Z:0mm).
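Assuming the text layout described above, the bounding-box numbers can be pulled out of the `body` line like so. This is a quick illustrative parser, not an official ACIS/SAT reader, and the scale factor of 1000 is the mm view discussed earlier:

```python
# Illustrative parser (not an official ACIS/SAT reader): grab the six
# bounding-box numbers after 'T' and convert from metres to mm using the
# file's dimension scaling factor.
def parse_body_bbox(line, scale=1000.0):
    tokens = line.split()
    nums = [float(t) for t in tokens[tokens.index("T") + 1:][:6]]
    lo, hi = nums[:3], nums[3:]          # two opposite corners, in metres
    return [v * scale for v in lo], [v * scale for v in hi]

body = ("body $-1 -1 -1 $-1 $1 $-1 $-1 T -0.023 0 0 "
        "-0.019999999999999993 0.0040000000000000001 0 #")
lo, hi = parse_body_bbox(body)
print(lo)   # [-23.0, 0.0, 0.0]  (mm)
print(hi)   # approximately [-20.0, 4.0, 0.0] (mm), FP noise included
```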

But note how the inaccuracies have already started to creep in. The triangle I drew is 3/4/5mm, with the vertices at (precisely, as in snap-to-grid) (-23,0,0), (-20,0,0), (-20,4,0) -- not the weird, not-quite-right positions defined in the 3 'point' lines shown at the bottom of the file above.

Now, you may think that this is down to the limitations of the floating-point math, but that is not the case. IEEE-754 floating-point math -- which is used by all current processors made, be they Intel, AMD, ARM, IBM, etc. -- can represent numbers in the range 10^-308 through 10^308, with a precision of 15.955 decimal digits.
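Those range and precision figures can be checked directly on any IEEE-754 platform; a quick Python sketch (the 15.955 figure is just the 53-bit significand expressed in decimal digits):

```python
import sys
import math

# 64-bit IEEE-754 doubles carry a 53-bit significand, which works out to
# 53 * log10(2) ~= 15.955 decimal digits of precision.
print(sys.float_info.mant_dig)        # 53
print(round(53 * math.log10(2), 3))   # 15.955

# And the representable magnitude range is roughly 10^-308 .. 10^308:
print(sys.float_info.max)             # 1.7976931348623157e+308
print(sys.float_info.min)             # 2.2250738585072014e-308
```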

In simple terms, that means 64-bit FP numbers (representing metres) can -- at the same time -- represent a distance of 1.0570824524312896405919661733615e+292 light years, and one trillion-trillion-trillionth of the size of an electron!

Those numbers in full look like this:

10570824524312900000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

and

0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000999999999999999

And with 15+ decimal digits of precision, they can represent those values accurately enough that the error on the distance from here to the Moon is less than a tenth of a micron.

Even if they are using 32-bit FP, with its 7+ decimal digits of accuracy -- which is quite possible, as using 32-bit math on a graphics processor speeds things up a lot -- that still means it can store the distance from here to the Moon -- in metres -- to the nearest 32 metres; and at the scale of a 1-metre part, to better than a tenth of a micron.
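Those orders of magnitude are easy to verify; Python's `math.ulp` gives the gap between adjacent representable doubles at a given magnitude, and the 32-bit gap follows from its 24-bit significand:

```python
import math

moon = 384_400_000.0   # mean Earth-Moon distance, in metres

# 64-bit double: spacing of adjacent representable values at this
# magnitude (2^28 <= moon < 2^29, so ULP = 2^(28-52)):
print(math.ulp(moon))                    # 5.96e-08 m, i.e. ~60 nanometres

# 32-bit float has a 24-bit significand, so at the same magnitude the
# spacing is 2^(28-23) = 32 metres:
exponent = math.floor(math.log2(moon))   # 28
print(2.0 ** (exponent - 23))            # 32.0
```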

All of which means that we should (easily) be able to draw things accurate to the nearest micron (10^-6 metres); but your own experiment shows that DSM doesn't like working to an accuracy finer than a few hundred microns -- 2 orders of magnitude less accurate than the math upon which it is based.

The only logical explanation I can postulate for this is that it uses just 16-bit FP math in order to speed up the graphics rendering, but even that does not explain why -20mm gets stored as -0.019999999999999993 and 4mm as 0.0040000000000000001.
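As an aside -- and at least for the 4mm value -- the stored digits are exactly what you get when the nearest 64-bit double to 0.004 is printed at 17 significant digits, i.e. a decimal-to-binary rendering artefact rather than lost precision:

```python
# The decimal 0.004 has no exact binary representation; the nearest
# 64-bit double, printed at 17 significant digits, shows the same
# trailing 1 seen in the saved file.
print(repr(0.004))             # 0.004  (shortest round-trip form)
print(format(0.004, ".17g"))   # 0.0040000000000000001

# Reading the long form back yields the identical double -- no precision
# was lost in storing it:
print(float("0.0040000000000000001") == 0.004)   # True
```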

The fact that I can draw something that is 0.018mm accurately if I scale it by 10, but not if I draw it 'real size', simply doesn't make sense. The whole point of the 'floating point' in floating-point math is that small things can be represented with very great precision, and as they get larger, the decimal (or binary) point floats to the right, retaining the same number of digits of precision even as those digits represent larger real-world values.

At the large end -- 1e292 light years, a distance billions and billions of times greater than the size of the universe -- no one cares if the value is a couple of kilometres out. But at the scale of common engineering parts, where bearing surfaces and push-fit parts are frequently toleranced in single-digit microns (0.000001 metres), not being able to draw a feature that is 18 microns wide is a joke.

As a follow-up: I'm sure there are people wondering why I want to be able to draw things so accurately. Why not just draw a representation of them, and then annotate the dimensions and tolerances to the specified accuracy?

I have two examples of why.

1) I'm designing a novel electric motor. One of the criteria is efficiency. And one of the main sources of inefficiency in motors is 'eddy currents'. These are electrical currents induced magnetically into the metal cores of the coils, which waste energy by generating heat.

The magnitude of those eddy currents depends on the width of the metal, orthogonal to the direction of the magnetic field, in which the current is induced. This is why the cores of motors (and transformers etc.) are made of thin layers of magnet steel.

If the core is 10mm thick, and you make that up from 20 layers of 0.5mm silicon steel, you cut the magnitude of the induced current almost exactly by 20. Thus reducing your eddy current losses to 1/20th.

Historically, 0.5mm steel was considered perfectly good; but with the advent of battery-powered vehicles, further cuts have been sought, and so motors moved to using 0.35mm and then 0.25mm laminations. However, that is about the limit of how far you can reduce the thickness of standard silicon steel.

But, about 20 years ago, a couple of companies developed a new form of steel (amorphous steel, or metallic glass) with vastly superior magnetic properties, which is (only) produced in thicknesses of 0.024mm or 0.018mm (Metglas/FINEMET). It reduces the eddy currents by far more than the ~20:1 ratio implied by the reduction in thickness, because of its much higher permeability: ~1,000,000, compared to 4,000 for standard electrical steel and ~100,000 for the super-electric nickel steels like Mu-metal and Supermalloy.

These factors mean that instead of a 15mm-thick electrical steel core made from 60 x 0.25mm layers:

I can (theoretically) use a 5mm-thick core constructed from 200 x 0.024mm layers, each with a 1 micron epoxy-filled gap:

Whilst for ordinary engineering purposes it isn't necessary to model this level of detail, in order to run accurate magneto-static and magneto-dynamic FEA analyses -- to prove the theory and quantify the energy and weight savings -- it is.

2) Gearsets.

If you are using a bought-in, off-the-shelf standard gearset, then there is no need to model it. Just download the (extremely crude) 3D models that the manufacturers provide and add an annotation specifying the part numbers.

Problem: electric motors are often quoted in the popular media as "producing maximum torque at zero RPM", which is a great quote and substantially true, but it completely ignores the reality that Power (watts) = Torque (N.m) * Angular velocity (RPM converted to radians per second) -- and at 0 RPM, that means no power.

A typical permanent-magnet direct-current (PMDC) motor, as used in battery-powered applications -- like cordless tools and hoovers, model planes and drones -- typically runs most efficiently at somewhere between 4,000 and 20,000 RPM; but the wheels of an e-bike or car at 30mph only need to turn at ~400 RPM. That means that to match the motor's most efficient speed to the vehicle's most typical running speed, you need a gearbox with a reduction ratio of at least 10:1.
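The torque/power relationship and the required ratio are easy to put in numbers. The figures below are illustrative values taken from the ranges above, not measured data:

```python
import math

def power_watts(torque_nm: float, rpm: float) -> float:
    """P = T * omega, with omega converted from RPM to radians/second."""
    return torque_nm * rpm * 2 * math.pi / 60

# Matching an efficient motor speed to a typical wheel speed:
motor_rpm, wheel_rpm = 4000, 400
print(motor_rpm / wheel_rpm)          # 10.0 -> ~10:1 reduction needed

# "Maximum torque at zero RPM" still means zero power at zero RPM:
print(power_watts(1.0, 0))            # 0.0 W
print(round(power_watts(1.0, 4000)))  # 419 W from 1 N.m at 4000 RPM
```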

The problem is that achieving that reduction ratio typically requires a 2-stage compound planetary gearset,

and with each set of meshing gears in the drive chain, you lose ~3% of your input energy to friction in the gears and bearings, and to oil drag. The required 2-stage planetary reduction plus torque splitter (differential) means you can say goodbye to at least 10%, and often 20%, between the motor and the road. (And if it is a 4-wheel-drive setup with 3 diffs, even more.)
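Because each mesh keeps only ~97% of the power passing through it, the losses compound multiplicatively. A sketch of that arithmetic -- the mesh counts here are illustrative assumptions, not figures from the thread:

```python
def drivetrain_efficiency(mesh_count: int, loss_per_mesh: float = 0.03) -> float:
    """Each meshing gear pair keeps (1 - loss) of the power; losses compound."""
    return (1.0 - loss_per_mesh) ** mesh_count

# A single reduction stage vs an assumed 4 meshes (2-stage planetary
# plus differential):
print(round(drivetrain_efficiency(1), 3))   # 0.97  -> 3% lost
print(round(drivetrain_efficiency(4), 3))   # 0.885 -> ~11.5% lost
```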

One of the unique aspects of my motor design is that it utilises an unusual arrangement of a pair of unequal-sized bevel gears to achieve a 10:1 (or greater) reduction ratio in a single stage, keeping the transmission losses to under 3%.

However, for the transmission to be quiet and smooth, it requires spiral bevel gears; but spiral bevel gears are less efficient than straight bevel gears because there is a certain amount of sliding contact at the beginning and end of each tooth contact, before the conjugate contact of the involute profile mates up. This inefficient sliding contact can be minimised by careful design of the contact patch, through profile shifting and crowning.

And that requires mechanical FEA analysis of an accurately defined model.

Buk,

I understand better your meaning concerning 64-bit FP. This is outside my knowledge.

wrt your comments: "...But at the scale of common engineering parts, where bearing surfaces and push-fit parts are frequently toleranced in single-digit microns (0.000001 metres), not being able to draw a feature that is 18 microns wide is a joke."

Buk, I don't understand this, because I have no trouble adding a 1um-thick localised ring in a sleeve situation -- this was done in a sketch view by making a 0.001mm-thick rectangle, filling it to make a surface, then pulling with add/revolve.

Additionally, by pulling the same thickness in a solid view, reduction to 0.1um is OK, and reducing to 0.02um is still OK, but at an entered size of 0.01um the feature disappeared.

This restriction on minimum 'feature size' correlates with the DesignSpark options for importing...

Buk -- sub-Angstrom feature sizes are possible... phew!

By altering edge length values directly in a section view, features of size 0.1 Angstrom are possible (my picture shows 1 Angstrom).

Hm.

Tim, I'm unclear what you think all of that proves? That I'm lying? That I imagined it? Made it up? (Despite there being several threads here documenting and discussing this very limitation.)

Try this: draw a horizontal straight line, say 10mm long. Select it and enter Move mode. Click Create Pattern, select the move handle orthogonal to the line, hit the space bar, enter 1mm, and press Enter. Modify the count to 50 and hit Enter:

Now select the line tool and attempt to connect the ends of the top two lines:

Oh dear. What happened there?

And that is one of a gazillion situations where attempting to draw small features results in DSM overriding the user input and doing its own thing -- which always results in not what the user wants.

Another: two existing lines 5 microns apart, and I want to connect them with a straight line:

Whoops. It did it again. Maybe if I am sneaky:

Whoops again. Of course, that's exactly what I want. Not!

And these are two of dozens of scenarios I've encountered where this happens.

And quite frankly, I don't care what excuses you choose to proffer on DSM's behalf for this behaviour -- nor do I get why you feel the need to do so -- this behaviour is wrong.

It breaks just about every rule and guideline ever written about User Interface Design and Human-Computer Interaction. I started seeking out and quoting authoritative UI design guidelines, but if you are interested you can seek them out yourself.

Tim said: "I understand better your meaning concerning 64-bit FP. This is outside my knowledge."

And that is kinda the point I was trying to make back at the beginning of this sub-thread -- i.e. the one where I responded to your first post in this thread.

Now, what I'm about to say could be taken as offensive, but if you Google "how many programmers are there in the world?", the answer varies somewhat depending on which reference you take as authoritative, but it is somewhere around 20 million -- roughly 0.3% of the world's population -- and there is no reason anyone who is not a programmer would have knowledge of the intricacies of FP math. In reality, it is probable that fewer than 10% of programmers have any real understanding of it. Please bear that in mind, and the fact that I have pointed it out, when I say...

When you attempt to extrapolate your findings -- on your machine, using your version of DSM, on the (minuscule) subset you've attempted of the myriad possible scenarios -- and then imply that those findings are 'typical' or 'predominant', much less 'definitive', you have overreached your level of understanding.

Buk.

I answer questions on DSM use and on problems that are encountered with the software -- everything ever created has limitations and working best practices, or indeed works one particular way only...

Any problems like the above line connections might have a simple answer. Where there is an actual bug that I can replicate, or acknowledged unknown behaviour, I will raise it with RS support -- or you could, directly, as well. I am an ordinary user with no special rights.

Working normally, with no special conditions, I cannot replicate your observations/issues with connecting lines -- but I agree, this seems very, very strange. I have tried many different combinations of graphics renderers, render quality, anti-aliasing, snap ranges (2), decimal precisions, snap-to-grid on/off, container ownership, structure position, etc.

My last comment would be to see if you can replicate this behaviour on another PC.

My system is V4, build 12131, Win10 Pro, Nvidia K4000 with the latest drivers, Lenovo S30, E5-1660 v2, 64GB 1866MHz ECC RAM. A high-level workstation 7 years ago.

Okay. Let me knock on the head the idea that my hardware or the renderer cannot cope with small lines.

Using the same scenario as above, where it wouldn't let me draw a 0.005mm straight line to connect two existing lines:

If I draw a (large) square with one edge that passes through the ends of those two lines, and connect their other ends; then Pull the sketch to form two surfaces and delete the square, I get a surface with a 0.005mm edge. And if I copy the 3 edges I want to the clipboard, delete the surface and paste them back, I get the 3 lines I wanted to draw:

So this isn't any physical or software limitation of my setup, but rather a deliberate decision by this version of DSM -- or rather its authors -- to override user input. Why, I cannot begin to hazard a guess.

BTW, you appear to have misread my parallel-lines example. Your image shows that your lines are 1mm apart. In my example, I created a pattern of 50 lines in a space of 1mm, meaning they are 0.0204...mm apart.

Buk.

I can now replicate both of these behaviours and stop them -- a bit of watching what actually happens in your GIFs, and a 'working practice' that seems to be of importance.

So the good news is that it isn't your system, but instead DSM quirks...

Give me a couple of minutes and I'll explain...

It seems lines equal to or shorter than 0.03mm, drawn under the file name and not in a container, may not be drawn, and can cause adjacent lines to mysteriously move.

Buk, as we have seen with other quirky behaviours, working with containers is the answer.

I'll report this...

Thank you! That indeed works.

I guess I'll keep my thoughts on how ludicrously capricious that is to myself.
