wriedmann wrote: I have not tested that, but in my experience (and I do a LOT of COM interaction between X# modules and VO applications) the COM interface is not very fast (and cannot be very fast because there is a lot of code and a lot of conversions involved).
Hi Wolfgang,
My gut reaction hints at my second programming mantra: "Chunky, not Chatty" when it comes to calling across layers, as such layers sometimes have real physical borders - in this case the marshalling code. I am pretty certain that your first example, done across COM into a StringBuilder, would be slower - at least at first, and for strings that are not really large. The second example - first concatenating lots of tiny strings into an intermediate string, then doing one large append - may well win, as the benefit of not tasking memory management with large discarded memory areas matters once the target string is in the multi-megabyte range.
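The "Chunky, not Chatty" idea can be sketched language-neutrally. Here is a minimal Python sketch (all names made up for illustration), where expensive_append() stands in for a chatty cross-layer call such as COM marshalling into a StringBuilder:

```python
# Sketch of "Chunky, not Chatty": batch many tiny pieces locally before
# crossing an expensive boundary. expensive_append() is a hypothetical
# stand-in for a cross-layer call that pays a fixed marshalling cost.

call_count = 0

def expensive_append(target, chunk):
    """Pretend each call crosses an expensive layer boundary."""
    global call_count
    call_count += 1
    target.append(chunk)

pieces = ["row %d;" % i for i in range(1000)]

# Chatty: one boundary crossing per tiny string.
chatty = []
for p in pieces:
    expensive_append(chatty, p)
chatty_calls = call_count            # 1000 crossings

# Chunky: concatenate locally first, then cross the boundary once.
call_count = 0
chunky = []
expensive_append(chunky, "".join(pieces))
chunky_calls = call_count            # 1 crossing

assert "".join(chatty) == "".join(chunky)
```

Both variants produce the same string; only the number of boundary crossings differs, and that is where the cost sits.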
In VFP we have similar issues, typically when string sizes rise above 10K and the memory allotment is set for a small VM. The typical response is similar to your second way (as we have no StringBuilder type), although often with the twist of not only adding small strings into one string, but into a small array of strings, which can then be concatenated in one line:
DIMENSION laTmp[7]
laTmp = "" && assigning a scalar to the array name sets all elements to 1 start value - nice in this context
lcLargeString = ""
for lnRun = 1 to 7
*-- build laTmp[lnRun]
next
lcLargeString = lcLargeString + laTmp[1] + laTmp[2] + laTmp[3] + laTmp[4] + laTmp[5] + laTmp[6] + laTmp[7]
The slow part is not the concatenation of one or more strings, but releasing the previous variable, claiming new memory, and assigning the total of the right side of the line. This can be seen by measuring: appending strings of identical length gets slower and slower as lcLargeString grows.
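The same array-then-concatenate pattern can be sketched in Python terms (names made up): collect the small strings first, then build the large string in one step, so the growing buffer is never re-copied on every iteration the way repeated `big = big + small` re-copies it:

```python
# Sketch of the array-then-join pattern: gather the small strings first,
# then concatenate once, instead of growing one big string piece by piece.

pieces = []
for n in range(7):
    pieces.append("chunk-%d;" % n)   # cheap: appends to a list, no big copy

large_string = "".join(pieces)       # one allocation for the final result
```

The cost of the loop stays proportional to the total amount of text, whereas growing one string in place re-copies the ever-larger buffer on each step.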
But the easiest way (even if it goes against the "RAM is always faster" reflex) is to open a buffered low-level file and just append the new strings until the result is finished. If needed, loading the result once with FileToStr() for further processing is often faster than memcpying it around in process space all the time, as all internal memory allotment and garbage collection is sidestepped until the final load.
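A minimal sketch of that file-append approach, in Python for illustration (path and record contents are made up; open()'s buffering parameter plays the role of the buffered low-level file handle, and the final read plays the role of FileToStr()):

```python
# Sketch: build a large result by appending to a buffered file instead of
# growing an in-memory string, then load it back once at the end.

import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "large_result.txt")

with open(path, "w", buffering=64 * 1024) as f:   # buffered writes
    for n in range(1000):
        f.write("record %d\n" % n)   # lands in the OS/library buffer, no big string grows

with open(path) as f:                # one load at the end, like FileToStr()
    result = f.read()

os.remove(path)
```

Each small write goes into the file buffer; the process never holds a half-built multi-megabyte string, and memory management only sees the final load.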
Unixoid behaviour makes sense there and is even easier to code and read. (I noticed that X# does not differentiate between buffered and unbuffered low-level file functions, but as buffered was/is the VFP default behaviour, the X# LLF implementation probably defaults to buffered as well. Question already raised on GitHub.)
That was already true on old HDs, and SSDs have improved write throughput further.
regards
thomas