std::this_thread::yield()

permal
Posts: 384
Joined: Sun May 14, 2017 5:36 pm

std::this_thread::yield()

Postby permal » Sun Aug 05, 2018 1:26 pm

Hi,

I just got back into development for the ESP32 after a few months of absence. First thing I wanted to do was to bring my Smooth framework up to par with the current IDF master-branch (and later the CMake-branch). To that end I've deployed the tool chain on a fresh Ubuntu 18.04.

I've got a task that either handles socket communication or calls std::this_thread::yield(). In my test applications that don't use any sockets, this task literally only calls std::this_thread::yield() over and over. This has been working fine since October last year (based on the git history for the code line in question). The last app I wrote was in April and this still functioned at that time.

With the current version of IDF/toolchain the yield call seems to have no effect, as the watchdog for idle tasks gets triggered and the task seemingly just spins:

Code: Select all

Task watchdog got triggered. The following tasks did not reset the watchdog in time:
- IDLE (CPU 0)
Tasks currently running:
CPU 0: pthread
CPU 1: IDLE

Exchanging the yield() call for a less desirable std::this_thread::sleep_for(std::chrono::milliseconds(1)); remedies the situation.

So, my question is: are there any recent (April or later) changes to the scheduler that would affect (or break?) std::this_thread::yield()?

permal
Posts: 384
Joined: Sun May 14, 2017 5:36 pm

Re: std::this_thread::yield()

Postby permal » Sun Aug 05, 2018 5:40 pm

Example code to reproduce the behavior.

The second thread never gets any run time; the expected behavior is that a "+" is printed every 500 ms.

Code: Select all

#include <chrono>
#include <thread>
#include "esp_log.h"

extern "C" void app_main()
{
    // This thread spins, yielding on every iteration.
    std::thread t([]() {
        while (true)
        {
            std::this_thread::yield();
        }
    });

    // This thread should print a "+" every 500 ms (note: ESP_LOGV only
    // prints when the verbose log level is enabled).
    std::thread t2([]() {
        while (true)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
            ESP_LOGV("A", "+");
        }
    });

    t.join();
    t2.join();
}
As a side note, replacing

Code: Select all

std::this_thread::yield();
with

Code: Select all

std::this_thread::sleep_for(std::chrono::microseconds(1000));
works, but any delay below 1000 us results in the same behavior as the yield call.

kolban
Posts: 1683
Joined: Mon Nov 16, 2015 4:43 pm
Location: Texas, USA

Re: std::this_thread::yield()

Postby kolban » Mon Aug 06, 2018 3:50 am

I understand that ESP-IDF leverages FreeRTOS for tasks (threads). I also believe that a pthread layer has been implemented which, I guess, maps onto FreeRTOS's notion of tasks. If I'm understanding correctly, you are using the std:: C++ standard library. Do we have any degree of understanding of what std:: threading is built upon? Do we have good confidence that std:: threading is available at the current build levels?

If there is ambiguity, could your project use FreeRTOS or pthreads explicitly instead?
Free book on ESP32 available here: https://leanpub.com/kolban-ESP32

permal
Posts: 384
Joined: Sun May 14, 2017 5:36 pm

Re: std::this_thread::yield()

Postby permal » Mon Aug 06, 2018 6:23 am

From https://github.com/espressif/esp-idf/re ... g/v3.0-rc1
libstdc++ concurrency support (std::condition_variable, std::mutex, etc)
Granted, it doesn't specifically mention yield, but it is part of the concurrency API.

Perhaps I could use the pthread API directly, but I'd prefer to know the why behind this issue before resorting to a workaround.

ESP_Angus
Posts: 2344
Joined: Sun May 08, 2016 4:11 am

Re: std::this_thread::yield()

Postby ESP_Angus » Mon Aug 06, 2018 6:50 am

Yes, pthread yielding (and libstdc++ thread yield on top of that) is implemented and supported.

The difference here is in the behaviour of "yield" on a conventional OS (like Windows/macOS/most installs of Linux) versus a real-time OS like FreeRTOS.

Yielding in FreeRTOS means "immediately allow a task of a higher priority to run, if there is one". Usually this turns out to be a no-op in ESP-IDF, because FreeRTOS is configured in preemptive mode which means under most circumstances the highest priority runnable task on each CPU is already running.

The difference between FreeRTOS and a conventional OS is that lower priority tasks will never run while a higher priority task is ready to run. So yielding from a higher priority task will never cause the lower priority idle task to run, for example. Suspending the higher priority task for 1 tick or more gives other tasks a chance to run.

I'm not sure this behaviour has ever been different in ESP-IDF, with the test app posted above I see the same behavior on v3.0-rc1 as on recent IDF versions. If the task watchdog is disabled, there's usually not any sign this kind of starvation is happening (and it may not be a problem at all: the idle task gets starved, but if nothing else is going on then this doesn't necessarily matter).

There is one important difference between yielding in vanilla FreeRTOS and yielding in ESP-IDF: ESP-IDF doesn't guarantee round-robin scheduling of tasks with the same priority. When yielding from multiple tasks with the same priority in vanilla FreeRTOS, they will all get roughly equal CPU time. However, this task design won't work as expected in ESP-IDF; the tasks need to spend some time sleeping to guarantee they all run.

In general, the solution to this kind of problem is to structure your app so it never has to poll for anything - if a task with nothing to do can stop and block on something (socket or sockets, mutex, queue, queueset, etc) rather than having to wake up periodically and poll, then the system can make the best real-time use of the CPU.

permal
Posts: 384
Joined: Sun May 14, 2017 5:36 pm

Re: std::this_thread::yield()

Postby permal » Mon Aug 06, 2018 7:12 am

Thanks Angus. No round-robin; that explains it. Though I am certain this worked previously (I've been running a system with it for 6 months). Perhaps it was just luck (i.e. different timing in older IDF versions).

I can use a condition_variable instead of yield, it is actually more elegant (I should have seen that myself).

You link to https://docs.espressif.com/ and not https://esp-idf.readthedocs.io/en/latest/, which one should we use?

ESP_Angus
Posts: 2344
Joined: Sun May 08, 2016 4:11 am

Re: std::this_thread::yield()

Postby ESP_Angus » Mon Aug 06, 2018 7:22 am

Great to hear you've found a good solution.
permal wrote: You link to https://docs.espressif.com/ and not https://esp-idf.readthedocs.io/en/latest/, which one should we use?
They should have the same content (maybe one will update a few minutes before the other when it finishes building updated versions), but we've started using https://docs.espressif.com/projects/esp-idf .

permal
Posts: 384
Joined: Sun May 14, 2017 5:36 pm

Re: std::this_thread::yield()

Postby permal » Mon Aug 06, 2018 7:23 am

ESP_Angus wrote:Great to hear you've found a good solution.
permal wrote: You link to https://docs.espressif.com/ and not https://esp-idf.readthedocs.io/en/latest/, which one should we use?
They should have the same content (maybe one will update a few minutes before the other when it finishes building updated versions), but we've started using https://docs.espressif.com/projects/esp-idf .
Ok. May I suggest keep only one? Two is just confusing.

ESP_Angus
Posts: 2344
Joined: Sun May 08, 2016 4:11 am

Re: std::this_thread::yield()

Postby ESP_Angus » Mon Aug 06, 2018 7:47 am

permal wrote: Ok. May I suggest keep only one? Two is just confusing.
We've removed all Espressif published links to the old site, and we've inquired with Read The Docs about configuring the old site to redirect to the new one automatically (the old site is RTD community hosting, new one is RTD commercial).

We haven't taken the old site down completely to avoid breaking all of the existing inbound links from around the web.

EDIT: I realise I didn't really answer your question in previous reply. The answer is: use docs.espressif.com.

permal
Posts: 384
Joined: Sun May 14, 2017 5:36 pm

Re: std::this_thread::yield()

Postby permal » Mon Aug 06, 2018 8:48 am

ESP_Angus wrote: We haven't taken the old site down completely to avoid breaking all of the existing inbound links from around the web.
That's reasonable thinking.

It turns out I cannot use anything other than polling in this task, as it manages multiple non-blocking sockets and must continually check their readable/writable status (via select()) while at the same time handling events from other parts of the system. In other words, this task must never block. That's the reason I opted for yield() when the task determined it had nothing to do.

So I see no other option than to use std::this_thread::sleep_for instead of yield(). However, as I also wrote above, sleeps shorter than 1 ms result in the same behaviour as the yield() call. I take it this is because the scheduler never gets around to starting the lower priority tasks?

Is there a way to calculate the minimum time a high priority task must sleep to guarantee that lower priority tasks get a chance to run, preferably at compile time? There's the setting for the tick frequency in FreeRTOS (CONFIG_FREERTOS_HZ). It's currently set at 1000 Hz, so 1 ms ticks; coincidence? Also, it is currently limited to a maximum of 1000 Hz. Is there a reason for that, or could it be increased further?
