
timer: Allow delays with a 32-bit microsecond timer

The current get_timer_us() uses 64-bit arithmetic even on 32-bit machines.
For microsecond-level timeouts, 32 bits is plenty: a 32-bit microsecond
count only wraps after about 71 minutes. Add a new function that uses an
unsigned long. On 64-bit machines this is still 64 bits, so there is no
penalty there; on 32-bit machines it is more efficient.

Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Simon Glass, 3 years ago
commit e1ddf67cb3
2 changed files with 16 additions and 0 deletions:
  include/time.h  +11 -0
  lib/time.c       +5 -0

include/time.h  +11 -0

@@ -17,6 +17,17 @@ unsigned long get_timer(unsigned long base);
 unsigned long timer_get_us(void);
 uint64_t get_timer_us(uint64_t base);
 
+/**
+ * get_timer_us_long() - Get the number of elapsed microseconds
+ *
+ * This uses 32-bit arithmetic on 32-bit machines, which is enough to handle
+ * delays of over an hour. For 64-bit machines it uses a 64-bit value.
+ *
+ * @base: Base time to consider
+ * @return elapsed time since @base
+ */
+unsigned long get_timer_us_long(unsigned long base);
+
 /*
  * timer_test_add_offset()
  *
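
A usage note, not part of the patch: callers record a start time and then
poll until the elapsed value exceeds a limit. The sketch below assumes a
hypothetical device_ready() condition and an illustrative TIMEOUT_US value;
only get_timer_us_long() comes from this change.

	#include <errno.h>
	#include <time.h>

	#define TIMEOUT_US	(100 * 1000)	/* 100 ms, for illustration only */

	int device_ready(void);			/* hypothetical poll condition */

	static int wait_for_device(void)
	{
		unsigned long start = get_timer_us_long(0);

		while (!device_ready()) {
			if (get_timer_us_long(start) > TIMEOUT_US)
				return -ETIMEDOUT;
		}

		return 0;
	}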

lib/time.c  +5 -0

@@ -152,6 +152,11 @@ uint64_t __weak get_timer_us(uint64_t base)
 	return tick_to_time_us(get_ticks()) - base;
 }
 
+unsigned long __weak get_timer_us_long(unsigned long base)
+{
+	return timer_get_us() - base;
+}
+
 unsigned long __weak notrace timer_get_us(void)
 {
 	return tick_to_time(get_ticks() * 1000);
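
Because the lib/time.c definition is __weak, a board or SoC can supply its
own version. A minimal sketch, assuming a hypothetical read_us_counter()
that returns a free-running microsecond count from hardware; neither the
helper nor the override below is part of this patch.

	#include <time.h>

	unsigned long read_us_counter(void);	/* hypothetical HW counter read */

	/* Strong definition overrides the __weak default in lib/time.c */
	unsigned long get_timer_us_long(unsigned long base)
	{
		return read_us_counter() - base;
	}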