clk: rockchip: convert rk3399 pll type to use readl_poll_timeout

A patch from »clk: rockchip: convert rk3399 pll type to use readl_poll_timeout« in state Mainline for linux-kernel

From: Heiko Stuebner <heiko.stuebner@...> Date: Mon, 20 Jan 2020 10:47:45 +0100

Commit-Message

Instead of open-coding the polling of the lock status, use the handy readl_poll_timeout for this. As PLL locking is normally blazingly fast and we don't want to incur additional delays, we don't do any sleeps, similar to, for example, the imx clk-pllv4, and define a very safe but still short timeout of 1ms.

Suggested-by: Stephen Boyd <sboyd@...>
Signed-off-by: Heiko Stuebner <heiko.stuebner@...>
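For context, readl_poll_timeout() comes from <linux/iopoll.h> and takes (addr, val, cond, sleep_us, timeout_us); with sleep_us = 0 it busy-waits between reads instead of sleeping. The following is a minimal hand-written sketch of roughly what the call in this patch does; example_wait_locked() is a hypothetical helper for illustration, not code from the patch or from iopoll.h:

#include <linux/errno.h>
#include <linux/io.h>		/* readl() */
#include <linux/ktime.h>

/*
 * Illustrative sketch, roughly equivalent to:
 *	readl_poll_timeout(addr, val, val & mask, 0, 1000);
 *
 * sleep_us == 0 means no usleep_range() between reads: keep
 * re-reading the register until the bit is set or 1000us of
 * wall-clock time (measured with ktime) have elapsed.
 */
static int example_wait_locked(void __iomem *addr, u32 mask)
{
	ktime_t timeout = ktime_add_us(ktime_get(), 1000);
	u32 val;

	for (;;) {
		val = readl(addr);
		if (val & mask)
			return 0;
		if (ktime_compare(ktime_get(), timeout) > 0) {
			/* final read so a lock that lands late isn't missed */
			val = readl(addr);
			return (val & mask) ? 0 : -ETIMEDOUT;
		}
		cpu_relax();
	}
}

The real macro additionally supports an optional sleep between reads, which the imx clk-pllv4 driver mentioned above makes use of; here it is deliberately left at zero.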

Patch-Comment

drivers/clk/rockchip/clk-pll.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)

Statistics

  • 10 lines added
  • 11 lines removed

Changes

------------------------ drivers/clk/rockchip/clk-pll.c ------------------------
index 198417d56300..43c9fd0086a2 100644
@@ -585,19 +585,18 @@ static const struct clk_ops rockchip_rk3066_pll_clk_ops = {
 static int rockchip_rk3399_pll_wait_lock(struct rockchip_clk_pll *pll)
 {
 	u32 pllcon;
-	int delay = 24000000;
+	int ret;
 
-	/* poll check the lock status in rk3399 xPLLCON2 */
-	while (delay > 0) {
-		pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(2));
-		if (pllcon & RK3399_PLLCON2_LOCK_STATUS)
-			return 0;
+	/*
+	 * Lock time typical 250, max 500 input clock cycles @24MHz
+	 * So define a very safe maximum of 1000us, meaning 24000 cycles.
+	 */
+	ret = readl_poll_timeout(pll->reg_base + RK3399_PLLCON(2), pllcon,
+				 pllcon & RK3399_PLLCON2_LOCK_STATUS, 0, 1000);
+	if (ret)
+		pr_err("%s: timeout waiting for pll to lock\n", __func__);
 
-		delay--;
-	}
-
-	pr_err("%s: timeout waiting for pll to lock\n", __func__);
-	return -ETIMEDOUT;
+	return ret;
 }
 
 static void rockchip_rk3399_pll_get_params(struct rockchip_clk_pll *pll,
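To put the numbers in the new comment together: at the 24MHz input clock one cycle is ~41.7ns, so the documented worst-case lock time of 500 cycles is about 21us, while the chosen timeout of 1000us corresponds to 1000us * 24MHz = 24000 cycles, roughly 48x the worst case.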