You already have tons of these bug reports, and I bet that you have closed them all as "by design" and told people to use a CASE expression. But no, you are wrong. This is a bug, and it should be fixed, even if the fix amounts to adding new functionality. Hear me out.
Consider this silly repro:
create table albert(a int NOT NULL,
                    b varchar(23) NOT NULL)
create table stina (a int NOT NULL)
insert albert (a, b)
   values (1, '99'), (2, 'Gurka')   -- a = 2 has no match in stina
insert stina (a) values(1), (3), (9), (12)
SELECT a.a, a.b + 100
FROM albert a
JOIN stina s ON s.a = a.a
This query fails with "Conversion failed when converting the varchar value 'Gurka' to data type int.", even though the offending row would never appear in the output if the error were not raised. This happens because the SELECT list is computed before the filtering. That may be an acceptable optimization, but instead of raising the error on the spot, you should defer it: the operator should insert an error token for the column value in the output stream, and only if that value is referenced further down the line should an error actually be raised.
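To make the proposal concrete, here is a sketch in Python (since the proposal concerns engine internals, not T-SQL syntax) of the deferred-error mechanism described above. The names `DeferredError`, `project`, `join` and `consume` are my own illustrative inventions, not anything in SQL Server: the projection still runs before the join, as today, but a failed conversion produces an error token instead of raising, and the error surfaces only if the row survives the join and the value is actually consumed.

```python
class DeferredError:
    """Placeholder stored in the output stream in place of a failed value."""
    def __init__(self, exc):
        self.exc = exc

def project(rows):
    # Eager projection, as the optimizer does today - but a conversion
    # failure is captured as a token rather than raised immediately.
    out = []
    for a, b in rows:
        try:
            val = int(b) + 100          # stands in for a.b + 100
        except ValueError as exc:
            val = DeferredError(exc)
        out.append((a, val))
    return out

def join(projected, keys):
    # The join discards rows whose key has no match, like JOIN stina.
    return [(a, v) for a, v in projected if a in keys]

def consume(value):
    # Only here, when the value is referenced further afield, do we raise.
    if isinstance(value, DeferredError):
        raise value.exc
    return value

albert = [(1, '99'), (2, 'Gurka')]
stina = {1, 3, 9, 12}

# The 'Gurka' row is dropped by the join, so its error token is never
# consumed and the query succeeds.
result = [(a, consume(v)) for a, v in join(project(albert), stina)]
print(result)   # [(1, 199)]
```

The key design point is that the error travels with the row as data; filtering a row away also discards its pending error, which is exactly the behaviour the report asks for.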
A similar case is this query (same data as above):
WITH numonly AS (
   SELECT a, b
   FROM   albert
   WHERE  b NOT LIKE '%[^0-9]%'
)
SELECT a, b
FROM   numonly
WHERE  b < 100
This query should never fail with "Conversion failed when converting the varchar value 'Gurka' to data type int.", even if there is non-numeric data in b: logically, I am filtering out the non-numeric values in the CTE. Yes, I know that you rearrange the calculation order as you see fit, and certainly, if you didn't, performance would suffer in many cases. So keep rearranging the calculation order, but you need to add a priority order, so that internally this is computed as
SELECT a, b FROM numonly WHERE b NOT LIKE '%[^0-9]%' AND THEN b < 100
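The "AND THEN" ordering argued for above can be illustrated in Python, where `and` is guaranteed to short-circuit left to right. The helper names are mine: `digits_only` stands in for `b NOT LIKE '%[^0-9]%'` and `less_than_100` for the implicit conversion in `b < 100`. The point is the contrast between a plan that honours the dependency between the two predicates and one that reorders them freely, as a cost-based optimizer is permitted to do today.

```python
def digits_only(b):
    return b.isdigit()          # stands in for b NOT LIKE '%[^0-9]%'

def less_than_100(b):
    return int(b) < 100         # fails on non-numeric input, like CONVERT

rows = [(1, '99'), (2, 'Gurka'), (3, '89')]

# Safe: digits_only is evaluated first, AND THEN less_than_100.
safe = [(a, b) for a, b in rows if digits_only(b) and less_than_100(b)]
print(safe)  # [(1, '99'), (3, '89')]

# Unsafe reordering - the conversion runs on 'Gurka' and blows up.
try:
    [(a, b) for a, b in rows if less_than_100(b) and digits_only(b)]
except ValueError as exc:
    print("reordered plan raised:", exc)
```

In SQL there is today no way to express this dependency; a priority order between predicates, as proposed here, would give the optimizer the same left-to-right guarantee.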
For more complex queries it may not be equally trivial, but I have in this Connect item suggested two mechanisms to avoid raising illogical errors.