[LITMUS^RT] ft-trace-overheads
Martinez Garcia Jorge Luis (PS-EC/ESB2)
JorgeLuis.MartinezGarcia at de.bosch.com
Mon Nov 26 15:04:00 CET 2018
Hello Björn,
Reducing the Feather-Trace buffer size did the trick.
Best,
Jorge
-----Original Message-----
From: litmus-dev <litmus-dev-bounces at lists.litmus-rt.org> On Behalf Of Björn Brandenburg
Sent: Thursday, 8 November 2018 14:24
To: litmus-dev at lists.litmus-rt.org
Subject: Re: [LITMUS^RT] ft-trace-overheads
> On 4. Nov 2018, at 14:15, Martinez Garcia Jorge Luis (PS-EC/ESB2) <JorgeLuis.MartinezGarcia at de.bosch.com> wrote:
> I’m running Litmus^RT with a reservation-based plugin on top of an RPi3. While trying to trace and process system overheads by means of the “ft-trace-overheads” script, I get the following:
>
> […]
> [ 1153.814976] ftcat invoked oom-killer:
> gfp_mask=0x24002c2(GFP_KERNEL|__GFP_HIGHMEM|__GFP_NOWARN), nodemask=0,
> order=0, oom_score_adj=0 […]
So allocating buffers to hold Feather-Trace samples caused the system to realize that it overcommitted on memory. Essentially, the Feather-Trace buffers were too large for the system to handle.
> Did you face a similar issue?
I vaguely remember running into similar issues, but don’t recall the specifics.
> Do you think that vmalloc=512 could help me solve the problem?
I don’t recall off the top of my head what this option does. The first step would be to make sure you are running a minimal system (no unnecessary daemons etc.) to save as much memory as possible. Then try to reduce the Feather-Trace buffer size so that you just barely don’t lose any trace samples.
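For intuition on why oversized trace buffers can push a small board into the OOM killer, here is a back-of-the-envelope sketch of the memory footprint of per-CPU sample buffers. The record size, the four-core count, and the idea that the buffer capacity is set as a power-of-two shift are illustrative assumptions, not confirmed LITMUS^RT specifics:

```python
# Rough estimate of total Feather-Trace buffer memory on a
# four-core board such as the RPi3 (~1 GB RAM, much of it
# already committed to the kernel and userspace).
# Assumptions: one buffer per CPU, each holding 2**shift
# records of RECORD_BYTES bytes; the real Feather-Trace
# record layout and configuration knob may differ.

RECORD_BYTES = 16   # assumed size of one trace record
NUM_CPUS = 4        # the RPi3 has four cores

def buffer_footprint(shift, num_cpus=NUM_CPUS, record_bytes=RECORD_BYTES):
    """Total bytes consumed by per-CPU buffers of 2**shift records each."""
    return num_cpus * (1 << shift) * record_bytes

for shift in (20, 22, 24):
    mib = buffer_footprint(shift) / (1 << 20)
    print(f"shift={shift}: {mib:.0f} MiB total")
```

Under these assumptions, a shift of 24 already demands 1 GiB of buffer space, i.e. more than the RPi3 has in total, while a shift of 20 needs only 64 MiB. This is why shrinking the buffers (while keeping them just large enough that no samples are dropped) resolves the overcommit.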
Regards,
Björn
_______________________________________________
litmus-dev mailing list
litmus-dev at lists.litmus-rt.org
https://lists.litmus-rt.org/listinfo/litmus-dev