[LITMUS^RT] Question about SEND_RESCHED overheads
Namhoon Kim
namhoonk at cs.unc.edu
Fri Oct 2 16:34:45 CEST 2015
Hi,
I'm trying to measure IPI overheads on an ARM machine. I used a Freescale
i.MX6 quad-core processor running at 800 MHz with LITMUS^RT version 2014.2.
I used the following commands to measure SEND_RESCHED overheads.
ftcat -v -c /dev/litmus/ft_msg_trace0
ftcat /dev/litmus/ft_msg_trace0 SEND_RESCHED_START SEND_RESCHED_END > msg0.bin &
ftcat /dev/litmus/ft_msg_trace1 SEND_RESCHED_START SEND_RESCHED_END > msg1.bin &
ftcat /dev/litmus/ft_msg_trace2 SEND_RESCHED_START SEND_RESCHED_END > msg2.bin &
ftcat /dev/litmus/ft_msg_trace3 SEND_RESCHED_START SEND_RESCHED_END > msg3.bin &
Then I ran 9 instances of rtspin under the GSN-EDF plugin for 60 seconds.
Here is the output of ftdump for msg0.bin (a sketch for decoding the raw
records directly follows the dump).
struct timestamp:
size = 16
offset(seq_no) = 8
offset(cpu) = 12
offset(event) = 13
SEND_RESCHED_START seq:0 pid:769 timestamp:1009582472 cpu:0 type:RT irq:1 irqc:23
SEND_RESCHED_END seq:1 pid:0 timestamp:1181828834 cpu:0 type:BE irq:1 irqc:22
SEND_RESCHED_START seq:2 pid:775 timestamp:1434812917 cpu:0 type:RT irq:1 irqc:00
SEND_RESCHED_END seq:3 pid:0 timestamp:1477670898 cpu:0 type:BE irq:1 irqc:31
SEND_RESCHED_START seq:4 pid:769 timestamp:1636595662 cpu:0 type:RT irq:1 irqc:07
SEND_RESCHED_END seq:5 pid:0 timestamp:1566580073 cpu:0 type:BE irq:1 irqc:06
SEND_RESCHED_START seq:6 pid:773 timestamp:1685892749 cpu:0 type:RT irq:1 irqc:05
SEND_RESCHED_END seq:7 pid:0 timestamp:1885725072 cpu:0 type:BE irq:1 irqc:04
SEND_RESCHED_START seq:8 pid:773 timestamp:1819667875 cpu:0 type:RT irq:1 irqc:22
SEND_RESCHED_END seq:9 pid:0 timestamp:2042956248 cpu:0 type:BE irq:1 irqc:21
SEND_RESCHED_START seq:10 pid:771 timestamp:2457858752 cpu:0 type:RT irq:1 irqc:14
SEND_RESCHED_END seq:11 pid:0 timestamp:2831992219 cpu:0 type:BE irq:1 irqc:13
SEND_RESCHED_START seq:12 pid:773 timestamp:2751551432 cpu:0 type:RT irq:1 irqc:12
SEND_RESCHED_END seq:13 pid:0 timestamp:3903309830 cpu:0 type:BE irq:1 irqc:11
SEND_RESCHED_START seq:14 pid:770 timestamp:281473726471972 cpu:0 type:RT irq:1 irqc:26
SEND_RESCHED_END seq:15 pid:0 timestamp:4138875361 cpu:0 type:BE irq:1 irqc:25
SEND_RESCHED_START seq:16 pid:770 timestamp:281473929807105 cpu:0 type:RT irq:1 irqc:07
SEND_RESCHED_END seq:17 pid:0 timestamp:2531701 cpu:0 type:BE irq:1 irqc:06
SEND_RESCHED_START seq:18 pid:768 timestamp:281474614564040 cpu:0 type:RT irq:1 irqc:02
SEND_RESCHED_END seq:19 pid:0 timestamp:272747425 cpu:0 type:BE irq:1 irqc:01
SEND_RESCHED_START seq:20 pid:770 timestamp:281474544210988 cpu:0 type:RT irq:1 irqc:29
SEND_RESCHED_END seq:21 pid:0 timestamp:515711061 cpu:0 type:BE irq:1 irqc:28
SEND_RESCHED_START seq:22 pid:774 timestamp:130974895 cpu:0 type:RT irq:1 irqc:18
SEND_RESCHED_END seq:23 pid:0 timestamp:819098249 cpu:0 type:BE irq:1 irqc:17
SEND_RESCHED_START seq:24 pid:770 timestamp:708054965 cpu:0 type:RT irq:1 irqc:25
SEND_RESCHED_END seq:25 pid:0 timestamp:1117979696 cpu:0 type:BE irq:1 irqc:24
SEND_RESCHED_START seq:26 pid:774 timestamp:521456038 cpu:0 type:RT irq:1 irqc:11
SEND_RESCHED_END seq:27 pid:0 timestamp:1321272823 cpu:0 type:BE irq:1 irqc:10
SEND_RESCHED_START seq:28 pid:769 timestamp:579049552 cpu:0 type:RT irq:1 irqc:12
SEND_RESCHED_END seq:29 pid:0 timestamp:1499039988 cpu:0 type:BE irq:1 irqc:11
SEND_RESCHED_START seq:30 pid:770 timestamp:1232804713 cpu:0 type:RT irq:1 irqc:27
SEND_RESCHED_END seq:31 pid:0 timestamp:1547436776 cpu:0 type:BE irq:1 irqc:26
SEND_RESCHED_START seq:32 pid:774 timestamp:1435431074 cpu:0 type:RT irq:1 irqc:11
SEND_RESCHED_END seq:33 pid:0 timestamp:1825269505 cpu:0 type:BE irq:1 irqc:10
SEND_RESCHED_START seq:34 pid:775 timestamp:1294121957 cpu:0 type:RT irq:1 irqc:21
SEND_RESCHED_END seq:35 pid:0 timestamp:2366850179 cpu:0 type:BE irq:1 irqc:20
SEND_RESCHED_START seq:36 pid:773 timestamp:1943105157 cpu:0 type:RT irq:1 irqc:13
SEND_RESCHED_END seq:37 pid:0 timestamp:2896992306 cpu:0 type:BE irq:1 irqc:12
SEND_RESCHED_START seq:38 pid:0 timestamp:2539061991 cpu:0 type:BE irq:1 irqc:12
SEND_RESCHED_END seq:39 pid:0 timestamp:3190687904 cpu:0 type:BE irq:1 irqc:11
SEND_RESCHED_START seq:40 pid:772 timestamp:2760896247 cpu:0 type:RT irq:1 irqc:08
SEND_RESCHED_END seq:41 pid:0 timestamp:3288161403 cpu:0 type:BE irq:1 irqc:07
SEND_RESCHED_START seq:42 pid:773 timestamp:281473849480902 cpu:0 type:RT irq:1 irqc:10
SEND_RESCHED_END seq:43 pid:0 timestamp:3754322649 cpu:0 type:BE irq:1 irqc:09
SEND_RESCHED_START seq:44 pid:774 timestamp:285137141 cpu:0 type:RT irq:1 irqc:01
SEND_RESCHED_END seq:45 pid:767 timestamp:92857631 cpu:0 type:RT irq:1 irqc:00
SEND_RESCHED_START seq:46 pid:774 timestamp:487247052 cpu:0 type:RT irq:1 irqc:14
SEND_RESCHED_END seq:47 pid:0 timestamp:308469858 cpu:0 type:BE irq:1 irqc:13
SEND_RESCHED_START seq:48 pid:773 timestamp:708138681 cpu:0 type:RT irq:1 irqc:09
SEND_RESCHED_END seq:49 pid:0 timestamp:601347415 cpu:0 type:BE irq:1 irqc:08
SEND_RESCHED_START seq:50 pid:774 timestamp:987758855 cpu:0 type:RT irq:1 irqc:14
SEND_RESCHED_END seq:51 pid:0 timestamp:825233273 cpu:0 type:BE irq:1 irqc:13
SEND_RESCHED_START seq:52 pid:772 timestamp:1660949863 cpu:0 type:RT irq:1 irqc:28
SEND_RESCHED_END seq:53 pid:0 timestamp:936570580 cpu:0 type:BE irq:1 irqc:27
SEND_RESCHED_START seq:54 pid:0 timestamp:1899186061 cpu:0 type:BE irq:1 irqc:22
SEND_RESCHED_END seq:55 pid:0 timestamp:1143122439 cpu:0 type:BE irq:1 irqc:21
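As an aside, the raw records can be decoded directly using the layout ftdump
prints at the top of the dump (size 16, seq_no at offset 8, cpu at 12, event
at 13). Below is a minimal Python sketch; the 48-bit-timestamp/16-bit-pid
packing of the first 8 bytes is my reading of the 2014.2 struct timestamp, so
treat it as illustrative rather than authoritative:

import struct

REC_SIZE = 16  # matches "size = 16" reported by ftdump above

def records(path):
    # Yield (seq_no, event, pid, cpu, timestamp) tuples from a binary trace.
    # Assumes little-endian records whose first 64-bit word packs a 48-bit
    # timestamp and a 16-bit pid, followed by a 32-bit seq_no at offset 8,
    # cpu at offset 12, the event id at offset 13, and a flags byte
    # (task type / irq flag / irq count) plus one byte of padding.
    with open(path, 'rb') as f:
        while True:
            raw = f.read(REC_SIZE)
            if len(raw) < REC_SIZE:
                break
            word, seq_no, cpu, event, flags, _pad = struct.unpack('<QIBBBB', raw)
            timestamp = word & ((1 << 48) - 1)  # low 48 bits
            pid = word >> 48                    # high 16 bits
            yield seq_no, event, pid, cpu, timestamp

for rec in records('msg0.bin'):
    print(rec)  # event ids are printed raw; ftdump maps them to names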
According to this log, the maximum difference (SEND_RESCHED_END -
SEND_RESCHED_START), excluding the obvious overflow values, is 1151758398
cycles, which is 1.439 s at 800 MHz. This value does not match what
kernelshark shows. Also, the timestamp of seq. 5 is earlier than the
preceding timestamp, and such inversions happen very often.
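For reference, a quick check over the ftdump text output along the following
lines reproduces the numbers above (a rough sketch only; the 2**40 cutoff is
just an arbitrary threshold to drop the apparent overflow timestamps):

import re, sys

CPU_HZ = 800e6  # i.MX6 clock frequency from above
OVERFLOW = 2**40  # crude cutoff; the bad timestamps all sit near 2^48
pat = re.compile(r'SEND_RESCHED_(START|END) seq:(\d+) .*timestamp:(\d+)')

prev = start = None
worst = 0
for line in sys.stdin:  # pipe in the output of "ftdump msg0.bin"
    m = pat.search(line)
    if not m:
        continue
    kind, seq, ts = m.group(1), m.group(2), int(m.group(3))
    if prev is not None and ts < prev:
        print('timestamp inversion at seq', seq)
    prev = ts
    if kind == 'START':
        start = ts if ts < OVERFLOW else None  # drop overflow values
    elif start is not None and ts < OVERFLOW:
        worst = max(worst, ts - start)  # END - START, in cycles
        start = None
print('max END - START: %d cycles = %.3f s' % (worst, worst / CPU_HZ))

On the log above this reports the 1151758398-cycle (1.440 s) pair at
seq. 12/13 and flags the inversion at seq. 5, among others.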
Is this the right way to measure IPI overheads (the ftcat commands above and
taking SEND_RESCHED_END - SEND_RESCHED_START)?
Does feather-trace use a different timestamp source for ft_cpu_traceX and
ft_msg_traceX? I ask because the other overheads (e.g., SCHED, CXS, TICKS,
RELEASE) are measured correctly.
I also ran the same task set on a KVM machine, and it does not show any of
these problems. They occur only on the ARM machine.
Thank you for your time.
Sincerely,
Namhoon