
Commit 8874a41

arndb authored and bp3tk0v committed
x86/qspinlock-paravirt: Fix missing-prototype warning
__pv_queued_spin_unlock_slowpath() is defined in a header file as a global function, and designed to be called from inline asm, but there is no prototype visible in the definition:

  kernel/locking/qspinlock_paravirt.h:493:1: error: no previous \
  prototype for '__pv_queued_spin_unlock_slowpath' [-Werror=missing-prototypes]

Add the prototype to the x86 header that contains the inline asm calling it, and ensure that header gets included before the definition, rather than after it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230803082619.1369127-8-arnd@kernel.org
1 parent ce0a1b6 commit 8874a41

File tree: 2 files changed (+12 lines, -10 lines)


arch/x86/include/asm/qspinlock_paravirt.h (2 additions, 0 deletions)

@@ -4,6 +4,8 @@
 
 #include <asm/ibt.h>
 
+void __lockfunc __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked);
+
 /*
  * For x86-64, PV_CALLEE_SAVE_REGS_THUNK() saves and restores 8 64-bit
  * registers. For i386, however, only 1 32-bit register needs to be saved

kernel/locking/qspinlock_paravirt.h (10 additions, 10 deletions)

@@ -485,6 +485,16 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 	return (u32)(atomic_read(&lock->val) | _Q_LOCKED_VAL);
 }
 
+/*
+ * Include the architecture specific callee-save thunk of the
+ * __pv_queued_spin_unlock(). This thunk is put together with
+ * __pv_queued_spin_unlock() to make the callee-save thunk and the real unlock
+ * function close to each other sharing consecutive instruction cachelines.
+ * Alternatively, architecture specific version of __pv_queued_spin_unlock()
+ * can be defined.
+ */
+#include <asm/qspinlock_paravirt.h>
+
 /*
  * PV versions of the unlock fastpath and slowpath functions to be used
  * instead of queued_spin_unlock().

@@ -533,16 +543,6 @@ __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 	pv_kick(node->cpu);
 }
 
-/*
- * Include the architecture specific callee-save thunk of the
- * __pv_queued_spin_unlock(). This thunk is put together with
- * __pv_queued_spin_unlock() to make the callee-save thunk and the real unlock
- * function close to each other sharing consecutive instruction cachelines.
- * Alternatively, architecture specific version of __pv_queued_spin_unlock()
- * can be defined.
- */
-#include <asm/qspinlock_paravirt.h>
-
 #ifndef __pv_queued_spin_unlock
 __visible __lockfunc void __pv_queued_spin_unlock(struct qspinlock *lock)
 {
