An LLM's "intent" is always to give you a plausible response, even if it doesn't have the "knowledge". The same behaviour in a human would be classed as lying, IMHO.
From the viewpoint of an observing human, what's the difference between the robot saying something which it believes to be true but isn't (very common with current software, and unlikely to change even in the distant future; see "humans, purportedly intelligent") and lying on purpose? If it lies on purpose, does the intent to lie come from the robot itself, or from its programmers? Ultimately, it seems like the presence and source of intent is the only difference. Regardless, a robot will never be right about everything it says, so its statements have to be weighed much as one would weigh statements coming from a human.
TL;DR: I expect robots to tell me untruths from time to time regardless of how I feel about it.
Hell no. Do not give machines the ability to lie. We already have enough trouble with people using technology to deceive without it choosing to be deceptive on its own.
For a "robot" or other automated appliance to perform tasks in the world, it must be able to perceive the world around it in some way. For it to interact with humans, it must perceive the humans: observe their actions, interpret their instructions, and understand their intentions. The direction our technology is headed has shown us that any such device would primarily be a surveillance platform that collects data on its users.
I don't want a smart car or a smart TV, and definitely not a smart household appliance such as a refrigerator. Why would I want a self-propelled, self-aware surveillance platform under the control of a multi-billion-dollar corporation in my home? Or workplace? Or anywhere?