Why is this code 6.5x slower with optimizations enabled?


I wanted to benchmark glibc's strlen function for some reason, and found that it apparently performs much slower with optimizations enabled in GCC. I have no idea why.



Here's my code:



#include <time.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

int main()
{
    char *s = calloc(1 << 20, 1);
    memset(s, 65, 1000000);
    clock_t start = clock();
    for (int i = 0; i < 128; ++i)
        s[strlen(s)] = 'A';
    clock_t end = clock();
    printf("%lld\n", (long long)(end - start));
    return 0;
}


On my machine it outputs:



$ gcc test.c && ./a.out
13336
$ gcc -O1 test.c && ./a.out
199004
$ gcc -O2 test.c && ./a.out
83415
$ gcc -O3 test.c && ./a.out
83415


Somehow, enabling optimizations causes it to execute longer.
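
A quick cross-check (my addition, prompted by the comments below that point at GCC's builtin expansion of strlen rather than glibc itself): force a genuine library call, either by compiling with -fno-builtin or by calling strlen through a volatile function pointer so the compiler cannot expand it inline. A minimal sketch of the latter, keeping the same setup as above:

#include <time.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>

int main()
{
    /* Sketch only: a volatile function pointer hides the callee from the
       optimizer, so GCC emits an indirect call to glibc's strlen instead
       of its own inline expansion, even at -O1/-O2. */
    size_t (*volatile libc_strlen)(const char *) = strlen;

    char *s = calloc(1 << 20, 1);
    memset(s, 65, 1000000);
    clock_t start = clock();
    for (int i = 0; i < 128; ++i)
        s[libc_strlen(s)] = 'A';
    clock_t end = clock();
    printf("%lld\n", (long long)(end - start));
    return 0;
}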










c gcc glibc

asked 2 hours ago by TsarN, edited 2 hours ago by Fei Xiang
























  • With gcc-8.2 debug version takes 51334, release 8246. Release compiler options -O3 -march=native -DNDEBUG

    – Maxim Egorushkin
    2 hours ago












  • Please report it to gcc's bugzilla.

    – Marc Glisse
    2 hours ago











  • Using -fno-builtin makes the problem go away. So presumably the issue is that in this particular instance, GCC's builtin strlen is slower than the library's.

    – David Schwartz
    2 hours ago











  • It is generating repnz scasb for strlen at -O1.

    – Marc Glisse
    2 hours ago












  • @MarcGlisse and for -O2 and -O3, it's loading and comparing the chars as integers. Unfortunately, the naive -O0 uses the library function which uses vector-instructions that beat this optimization easily.

    – EOF
    1 hour ago


















1 Answer
Testing your code on Godbolt's Compiler Explorer provides this explanation:



  • at -O0 or without optimisations, the generated code calls the C library function strlen;

  • at -O1 the generated code uses a simple inline expansion based on a rep scasb instruction;

  • at -O2 and above, the generated code uses a more elaborate inline expansion that compares several bytes at a time (a rough C sketch of this word-at-a-time approach follows the list).
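
As a rough illustration of what such a word-at-a-time scan looks like, here is a portable C sketch of the general technique. This is an illustration only: the real glibc implementation uses hand-tuned (often SIMD) assembly and handles alignment, page boundaries and strict-aliasing concerns much more carefully, and GCC emits its expansion directly as machine code.

#include <stddef.h>
#include <stdint.h>

/* Sketch of a word-at-a-time strlen: scan one machine word per iteration
   and use the classic "has a zero byte" bit trick to find the terminator. */
static size_t strlen_wordwise(const char *s)
{
    const char *p = s;

    /* Advance byte by byte until p is aligned on a word boundary. */
    while ((uintptr_t)p % sizeof(uintptr_t) != 0) {
        if (*p == '\0')
            return (size_t)(p - s);
        p++;
    }

    /* ones = 0x0101...01, highs = 0x8080...80:
       (w - ones) & ~w & highs is nonzero iff some byte of w is zero. */
    const uintptr_t ones = (uintptr_t)-1 / 0xFF;
    const uintptr_t highs = ones << 7;
    const uintptr_t *w = (const uintptr_t *)p;
    while (((*w - ones) & ~*w & highs) == 0)
        w++;

    /* The terminator is somewhere in *w: finish byte by byte. */
    for (p = (const char *)w; *p != '\0'; p++)
        ;
    return (size_t)(p - s);
}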

Benchmarking your code repeatedly shows a substantial variation from one run to another, but increasing the number of iterations shows that:



  • the -O1 code is much slower than the C library implementation: 32240 vs 3090;

  • the -O2 code is faster than the -O1 code but still substantially slower than the C library code: 8570 vs 3090.

This behavior is specific to gcc and glibc. The same test on OS/X with clang and Apple's Libc does not show a significant difference, which is no surprise since Godbolt shows that clang generates a call to the C library strlen at all optimisation levels.



This could be considered a bug in gcc/glibc, but more extensive benchmarking might show that the overhead of calling strlen matters more than the inline code's poor performance for small strings. The strings you benchmark on are uncommonly large, so focusing the benchmark on ultra-long strings might not give meaningful results.



I updated the benchmark for smaller strings: it shows similar performance at -O0 and -O2 for string lengths from 0 to 100, but -O1 is still much worse, roughly 3 times slower.



Here is the updated code:



#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <stdio.h>

void benchmark(int repeat, int minlen, int maxlen)
{
    char *s = malloc(maxlen + 1);
    memset(s, 'A', minlen);
    long long bytes = 0, calls = 0;
    clock_t clk = clock();
    for (int n = 0; n < repeat; n++) {
        for (int i = minlen; i < maxlen; ++i) {
            bytes += i + 1;
            calls += 1;
            s[i] = '\0';
            s[strlen(s)] = 'A';
        }
    }
    clk = clock() - clk;
    free(s);
    double avglen = (minlen + maxlen - 1) / 2.0;
    double ns = (double)clk * 1e9 / CLOCKS_PER_SEC;
    printf("average length %7.0f -> avg time: %7.3f ns/byte, %7.3f ns/call\n",
           avglen, ns / bytes, ns / calls);
}

int main()
{
    benchmark(10000000, 0, 1);
    benchmark(1000000, 0, 10);
    benchmark(1000000, 5, 15);
    benchmark(100000, 0, 100);
    benchmark(100000, 50, 150);
    benchmark(10000, 0, 1000);
    benchmark(10000, 500, 1500);
    benchmark(1000, 0, 10000);
    benchmark(1000, 5000, 15000);
    benchmark(100, 1000000 - 50, 1000000 + 50);
    return 0;
}


Here is the output:




chqrlie> gcc -std=c99 -O0 benchstrlen.c && ./a.out
average length 0 -> avg time: 14.000 ns/byte, 14.000 ns/call
average length 4 -> avg time: 2.364 ns/byte, 13.000 ns/call
average length 10 -> avg time: 1.238 ns/byte, 13.000 ns/call
average length 50 -> avg time: 0.317 ns/byte, 16.000 ns/call
average length 100 -> avg time: 0.169 ns/byte, 17.000 ns/call
average length 500 -> avg time: 0.074 ns/byte, 37.000 ns/call
average length 1000 -> avg time: 0.068 ns/byte, 68.000 ns/call
average length 5000 -> avg time: 0.064 ns/byte, 318.000 ns/call
average length 10000 -> avg time: 0.062 ns/byte, 622.000 ns/call
average length 1000000 -> avg time: 0.062 ns/byte, 62000.000 ns/call
chqrlie> gcc -std=c99 -O1 benchstrlen.c && ./a.out
average length 0 -> avg time: 20.000 ns/byte, 20.000 ns/call
average length 4 -> avg time: 3.818 ns/byte, 21.000 ns/call
average length 10 -> avg time: 2.190 ns/byte, 23.000 ns/call
average length 50 -> avg time: 0.990 ns/byte, 50.000 ns/call
average length 100 -> avg time: 0.816 ns/byte, 82.000 ns/call
average length 500 -> avg time: 0.679 ns/byte, 340.000 ns/call
average length 1000 -> avg time: 0.664 ns/byte, 664.000 ns/call
average length 5000 -> avg time: 0.651 ns/byte, 3254.000 ns/call
average length 10000 -> avg time: 0.649 ns/byte, 6491.000 ns/call
average length 1000000 -> avg time: 0.648 ns/byte, 648000.000 ns/call
chqrlie> gcc -std=c99 -O2 benchstrlen.c && ./a.out
average length 0 -> avg time: 10.000 ns/byte, 10.000 ns/call
average length 4 -> avg time: 2.000 ns/byte, 11.000 ns/call
average length 10 -> avg time: 1.048 ns/byte, 11.000 ns/call
average length 50 -> avg time: 0.337 ns/byte, 17.000 ns/call
average length 100 -> avg time: 0.299 ns/byte, 30.000 ns/call
average length 500 -> avg time: 0.202 ns/byte, 101.000 ns/call
average length 1000 -> avg time: 0.188 ns/byte, 188.000 ns/call
average length 5000 -> avg time: 0.174 ns/byte, 868.000 ns/call
average length 10000 -> avg time: 0.172 ns/byte, 1716.000 ns/call
average length 1000000 -> avg time: 0.172 ns/byte, 172000.000 ns/call





answered 1 hour ago by chqrlie, edited 41 mins ago

























  • Wouldn't it still be better for the inlined version to use the same optimizations as the library strlen, giving the best of both worlds?

    – Daniel H
    1 hour ago






  • It would, but the hand optimized version in the C library might be larger and more complicated to inline. I have not looked into this recently, but there used to be a mix of complex platform specific macros in <string.h> and hard coded optimisations in the gcc code generator. Definitely still room for improvement on intel targets.

    – chqrlie
    1 hour ago











  • Does it change if you use -march=native -mtune=native?

    – Deduplicator
    31 mins ago











  • Note that the GNU C library function for strlen() is likely optimised for extremely large strings (that no sane programmer will care about) at the expense of performance for small strings (that are extremely common); and the optimisations done by the library version should never be done. The problem here is that the OP's code doesn't keep track of the string's length itself (e.g. with an int len; variable) and should not have used strlen() at all, making the code so bad for performance that "optimised for something no sane programmer would care about" actually helped.

    – Brendan
    29 mins ago












  • @Brendan: the OP is trying to benchmark strlen. He does focus the benchmark on insanely long strings for which the C library strlen outperforms inline expansion hands down. I improved this benchmark and tested various string lengths. It appears from the benchmarks on linux with gcc (Debian 4.7.2-5) 4.7.2 running on an Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz that the inline code generated by -O1 is always slower, by as much as a factor of 10 for moderately long strings, while -O2 is only slightly faster than the libc strlen for very short strings and half as fast for longer strings.

    – chqrlie
    5 mins ago
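
To illustrate the point made in the comment above about keeping track of the length: if the caller remembers the length, strlen drops out of the loop entirely and the quadratic rescanning disappears. A minimal sketch of such a rewrite of the original loop (my own example, not from the thread, assuming the buffer layout from the question):

#include <string.h>
#include <stdlib.h>

int main()
{
    char *s = calloc(1 << 20, 1);
    memset(s, 65, 1000000);

    size_t len = 1000000;      /* the length is known, no need to rescan */
    for (int i = 0; i < 128; ++i) {
        s[len] = 'A';          /* append one character ... */
        len += 1;              /* ... and update the stored length */
        s[len] = '\0';         /* keep the string terminated */
    }

    free(s);
    return 0;
}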










