
Will reducing file size improve performance?

9 comments

  • tab...

    Hi

    I wish I had a definite answer - all I have is what I do... others may have different views backed with empirical evidence.

    The V5 implementation of enhanced graphics could be better - it's significantly slower than the standard solid view. I understand there's no slowdown in V6.

    Using the Constraint Sketcher version likewise has a big impact - again, fixed in V6 I believe.

    The standard solid view is by far the fastest for screen manipulation.

    Using the program's spin/zoom tools is much more responsive than a 3D Connexion spacemouse.

    Use Alt+Z for quick box zooming.

    Use the zoom key in combination with selected items or a structure browser selection to zoom quickly in to the item.

    A spline face, when blended/interfaced with other spline faces, is a data hog. Low render quality helps here; however, in these situations it's important to move these 'heavy' components out, i.e. RMB the component, Source > Convert to External. This should make a good difference with these parts.

    Generally, a 'normal' external file referenced in will be faster - by up to ~25%.

    I never work with the auto 'power select' feature on - it just takes too long for parts with multiple holes, for example. It runs through an algorithm - selecting an edge, it will compile 5 lists... eventually. Also, as a default for selection, I uncheck the 'search all bodies' box.

    Corrupted parts are also slow to respond - run Check Geometry often. File name > RMB > Check Geometry will check everything within.

    I'll reply more if I think of anything else.

  • tab...

    I have to agree that only accessing 1 GB of memory seems odd... I have to believe that allowing a greater amount would assist in selection lists and other graphical performance like hidden-line graphics. But I am not a computer engineer. I believe DSM is not graphically optimised - I do not know about SpaceClaim.

    A 5000-hole test plate: selecting one hole face takes 3 to 4 secs to group-select all the others. BUT selecting an edge of a hole takes 2+ minutes!

    An edge corner-select of the rectangular plate also takes 3 to 4 secs, despite there being only 4 edges!

    I hope my musings help.

  • Me Here

    I increased my memory from 8 GB to 20 GB, added a fast SSD for spill-file use and bought a discrete graphics card to use instead of the one built in to my AMD APU. Whilst the fps figure from the F9 graphics test increased significantly (5x), the change in responsiveness to keyboard/mouse actions -- selecting, moving, pulling etc. -- was negligible.

    Further investigation shows that DSM barely uses the GPU. Best I can tell, it uses it for rendering the screen image, but not for all the floating-point math it performs to do the constructive solid geometry (CSG) involved in unions, intersections etc., nor when performing the math involved in calculating hit points, extrusions etc.

    In other words, it is using the CPU's serialised floating-point math instructions instead of the embarrassingly parallel capabilities of the GPU when calculating user interactions and model changes. Which sucks, but is not uncommon: moving existing CPU-based floating-point code to make effective use of a GPU is a difficult and painstaking task.
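
    To make "embarrassingly parallel" concrete: the per-vertex math in CSG-style operations is independent from vertex to vertex, so in principle it can be fanned out across cores (or a GPU) with no coordination. A minimal C++17 sketch of the idea - illustrative only, nothing to do with DSM's actual internals:

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical per-vertex operation, e.g. part of a rigid-body transform.
Vec3 transformVertex(const Vec3& v) {
    return { v.x * 0.5f + 1.0f, v.y * 0.5f, v.z * 0.5f };
}

void transformMesh(std::vector<Vec3>& verts) {
    // Serial version: one vertex at a time on a single core.
    // std::transform(verts.begin(), verts.end(), verts.begin(), transformVertex);

    // Parallel version: identical math, but the standard library is free to
    // spread the work across all cores, because each vertex is independent.
    std::transform(std::execution::par_unseq,
                   verts.begin(), verts.end(), verts.begin(), transformVertex);
}
```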

    The other problem (I believe, going by the rate at which page faults occur) is that DSM is compiled with MS Visual Studio C++ and uses its default memory allocator, which is notoriously bad at managing lots of small, quickly changing allocations; especially when the total memory requirement of the program is being increased in lots of small allocations, as typified by CSG algorithms. If they switched to a different memory allocator, it would likely have a dramatic effect on the performance of the program.
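
    For a sense of what that allocation pattern looks like, here is an illustrative sketch (names invented, not DSM code) of the small-object churn that CSG-style code produces - a large population of tiny, short-lived records being created and destroyed. Under an allocator that is weak at this pattern, a loop like this is dominated by heap overhead rather than useful work:

```cpp
#include <memory>
#include <vector>

struct Edge { double p0[3], p1[3]; };  // stand-in for a tiny topology record

void churn() {
    // A rolling working set of small objects: each iteration frees one old
    // record and allocates a new one, hammering the small-object allocator.
    std::vector<std::unique_ptr<Edge>> live(10'000);
    for (int i = 0; i < 1'000'000; ++i)
        live[i % live.size()] = std::make_unique<Edge>();
}
```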

    Since SpaceClaim (on which DSM is based) has been owned by Ansys for some time now, and Ansys already has a lot of somewhat competing CAD software in its catalogue, I wonder how much incentive there is for them to improve SC/DSM much.

  • Richard Rivait

    Thank you @Tim Heeney and @Me Here for your insights and guidance. It is much appreciated.

    I should have mentioned that I am using DSM v5 64-bit. I only mentioned SpaceClaim because that is how DSM is listed in Task Manager.

    @Tim Heeney: how can I turn off enhanced graphics and enable the standard solid view? Is V6 available? I can only find DS PCB v6. I don't know what a 3D Connexion spacemouse is, so I'm pretty sure I don't have one. RMB the component, Source > Convert to External is the step I was missing; I was just putting my objects into components. I was really hoping that a geometry problem would turn out to be the issue, but no luck there. I'm not sure what you meant by "I have to agree that only accessing 1 GB of memory seems odd." Is there a 1 GB limitation? Is it adjustable?

    @Me Here: I have not noticed any page faults when running DSM. Actually, I don't think I have encountered a page fault in years, so I don't even remember how one presents (something vague deep in the brainbox, but I can't get hold of it). You are probably correct that there is no incentive for Ansys to make too many improvements. They have to drive the pro users to the paid product or there is no money to develop anything. When I think of the huge sums we used to pay for much less capable products back in the dark ages, I am grateful for what they've given us free.

  • tab...

    Richard - I agree about what's on offer for free nowadays. Two decades ago I was paying £5,000-£6,000, with £1,200/year support fees. When SpaceClaim first came out I should have gone with them immediately, but I muscled on with CoCreate OneSpace Designer. Since bought by PTC, it is still sold today as Creo Elements/Direct (a dynamic, history-free modeller).

    Getting back to our problem...

    Another tip - I find the DirectX renderer to be ~5% faster than OpenGL. Subjectively I prefer OpenGL for smoothness, DirectX being 'sharp' with more contrast, but the 'need for speed' makes DirectX my preferred choice.

  • Me Here

    Richard said: "I have not noticed any page faults when running dsm. Actually, I don't think I have encountered a page fault in years so I don't even remember how it presents. (something vague deep in the brainbox but I can't get hold of it)."

    Page faults aren't something you will notice unless you go looking for them and have the diagnostic tools to examine them. They aren't errors per se, but rather a normal (if time-consuming) part of operating system (OS) virtual memory (VM) operations.

    They are hardware exceptions that occur when a process (program) attempts to access a page (4 KB) of memory that has been allocated to the process but not yet committed.

    Simplistically, when a process requests a large chunk of memory from the OS, it can choose to have that memory both allocated (address space reserved) and committed (actually backed by physical memory); but that reduces the memory available for allocation to other processes, and it may never get around to using it all.

    Instead, it can choose to allocate a large amount of memory but NOT commit it, in which case the virtual memory allocated will not immediately be backed by physical memory. What then happens is that when the process tries to access a new page of allocated-but-uncommitted memory, a page fault occurs and the process is suspended until the OS finds and commits a page of physical memory to back the virtual page being accessed. The process then resumes.
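
    Windows exposes this reserve/commit split directly through the VirtualAlloc API. A minimal sketch of the mechanism (just an illustration, not DSM's code - note that even committed pages typically get physical backing lazily, on the first-touch page fault):

```cpp
#include <windows.h>

int main() {
    const SIZE_T size = 256 * 1024 * 1024;

    // Reserve address space only: no physical memory is consumed yet.
    char* base = static_cast<char*>(
        VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS));

    // Commit the first 64 KB; the OS will back these pages with physical
    // memory on first touch, servicing a page fault as described above.
    VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE);
    base[0] = 1;  // first touch: a page fault is taken and serviced

    // Touching the reserved-but-uncommitted remainder would NOT be serviced;
    // with nothing set up to handle it, it becomes an access violation:
    // base[1024 * 1024] = 1;  // would crash

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```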

    This is a great strategy, as it allows many processes to overcommit memory without actually running out of physical memory. The problem is that each page fault is very costly in terms of time: the program attempts to write a few bytes of the new page, and instead of taking a few microseconds it takes many milliseconds - 1,000 times slower.

    The problem with the MS standard C++ allocator is that whilst it may request new memory from the OS in chunks of several megabytes, it only allocates that memory and relies on page faults to perform the commits as the memory is accessed. This is good for letting many programs run concurrently, but requires a page fault for every 4 KB of memory the program expands into, which is very costly in terms of performance.

    Many better (faster) memory allocators exist. They all tend to use strategies that commit memory in chunks larger than 4 KB. For example, they may start out committing 64 KB each time a page fault occurs, thus reducing the number of page faults (and the costly pauses to the program's execution) to 1/16th. The better ones keep a record of the time between consecutive page faults, and when the program is performing operations that commit a lot of memory -- think of DSM performing Move > Create Pattern followed by a Combine of the replicated elements -- they recognise this quickly and start increasing (often doubling) the size of the chunk committed at each page fault.
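
    A toy sketch of that doubling strategy (all names and sizes illustrative, not taken from any real allocator): an arena reserves a large address range up front and commits it in chunks that double each time the in-use pointer crosses the committed frontier, so a sustained burst of growth takes ever fewer faults.

```cpp
#include <windows.h>
#include <cstddef>

class GrowingArena {
    char*       base_;
    std::size_t reserved_;
    std::size_t committed_ = 0;
    std::size_t used_      = 0;
    std::size_t chunk_     = 64 * 1024;  // start by committing 64 KB at a time

public:
    explicit GrowingArena(std::size_t reserve)
        : base_(static_cast<char*>(
              VirtualAlloc(nullptr, reserve, MEM_RESERVE, PAGE_NOACCESS))),
          reserved_(reserve) {}

    // Bump allocator: error handling, alignment and the reserved_ bound are
    // omitted to keep the growth strategy visible.
    void* alloc(std::size_t n) {
        while (used_ + n > committed_) {
            // Commit the next chunk, then double the chunk size, so sustained
            // growth (e.g. a big pattern-and-combine) needs ever fewer commits.
            VirtualAlloc(base_ + committed_, chunk_, MEM_COMMIT, PAGE_READWRITE);
            committed_ += chunk_;
            chunk_ *= 2;
        }
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

    ~GrowingArena() { VirtualFree(base_, 0, MEM_RELEASE); }
};
```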

    By way of example, this is a new instance of DSM with a new design started and a short 8 mm repeating section of rebar pasted into that design on the left, and the current performance statistics for that process on the right:

    The two salient values are Peak private bytes: 728,240 K (boxed in green) and Page faults: 806,685 (boxed in red).

    This animation shows me using Move > Create Pattern to replicate that section 300 times and then Combining them to form a single 2.4 m section of rebar:

    See the next post for the animation. I posted the wrong image here and this forum won't let me add new images when editing.

    And these are the process statistics afterwards:

    The salient values are now Peak private bytes: 1,457,560 K (in green) and Page faults: 1,367,546.

    From this we can deduce that increasing memory by 729,320 K (1,457,560 K - 728,240 K) required 560,861 page faults (1,367,546 - 806,685), or roughly 1 page fault for every 1.3 K of allocation, which is abysmal; the performance impact is huge.

    From the Wikipedia page linked above:

    Page faults, by their very nature, degrade the performance of a program or operating system ...

    Major page faults on conventional computers using hard disk drives for storage can have a significant impact on performance, as an average hard disk drive has an average rotational latency of 3 ms, a seek time of 5 ms, and a transfer time of 0.05 ms/page. Therefore, the total time for paging is near 8 ms (= 8,000 μs). If the memory access time is 0.2 μs, then the page fault would make the operation about 40,000 times slower.

    Those figures exaggerate the problem, as that description is only correct when physical memory has been exceeded and swapping to disk is in effect. The cost is less when the OS only needs to commit a physical page of available RAM and link it (set the TLB) to the preallocated virtual address space, but it is still considerably more (maybe 1,000 to 2,000 times) than a simple memory access.

    It's easy to see that if each page fault committed, say, 64 K each time, the number of page faults would be almost 50 times smaller (64 K per fault versus the ~1.3 K observed above).

    If DSM were built with a different allocator -- say Hoard, which is a simple drop-in replacement -- I suspect it would substantially speed up many of the operations for almost no effort. In one project I was involved in, it cut the runtime of a benchmark by 90% over the standard MS allocator.
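
    "Drop-in" here means no source changes: the replacement allocator intercepts malloc/free (and hence operator new/delete) at link or load time. A minimal timing harness to try this yourself - the build/run lines assume a Linux box with Hoard's libhoard.so installed, and the path is illustrative:

```cpp
// Build and run (illustrative paths):
//   g++ -O2 -std=c++17 alloc_bench.cpp -o alloc_bench
//   ./alloc_bench                                        # default allocator
//   LD_PRELOAD=/usr/local/lib/libhoard.so ./alloc_bench  # Hoard, same binary
#include <chrono>
#include <cstdio>
#include <memory>
#include <vector>

int main() {
    const auto t0 = std::chrono::steady_clock::now();

    // The same small-object churn discussed earlier: lots of tiny,
    // short-lived allocations, as CSG-heavy code tends to produce.
    std::vector<std::unique_ptr<double[]>> live(4096);
    for (int i = 0; i < 2'000'000; ++i)
        live[i % live.size()] = std::make_unique<double[]>(6);

    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                        std::chrono::steady_clock::now() - t0).count();
    std::printf("elapsed: %lld ms\n", static_cast<long long>(ms));
    return 0;
}
```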

  • Me Here

    The animation:

  • Richard Rivait

    Thanks again for the feedback, guys. I'll try the suggestions and some other tinkering and let you know the results. I just ran the F9 performance test for the first time. Oddly, it shows 4 GB of memory usage - not 1 GB like Tim, and not anything higher, although I have 12 GB of RAM. Perhaps it references graphics memory. I don't know my graphics specs; pretty sure I have integrated graphics, not a discrete graphics card.

  • tab...

    Richard Rivait

    I now think that the memory usage in my performance report is incorrect. I have 4 GB of video memory, and Windows does report an incorrect quantity of video memory in certain listings. As young(ish) folk say, 'my bad'.

    I have found that working with faces makes the file size slightly bigger and the fps slightly lower compared to solids.

