Union all at WebFOCUS level - webfocus

How do I solve this problem in WebFOCUS?
--big sql-query first
--big sql-query second
ORDER BY 1,2,3
This gives the error message "statement size or complexity exceed server limits" on a customer server. If instead I run the two queries separately, like
--big sql-query first
--big sql-query second
then how can I combine the results, something like
"SQLOUT = SQLOUT1 union all SQLOUT2 order by 1,2,3"?

While I would be concerned about the error message you are getting, without more details I'm not sure how to address that part of the issue. Also, I haven't worked with Sybase in the past, either.
However, you could run the two statements separately, and then use MORE to do the union.
--big sql-query first
--big sql-query second
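A rough sketch of the MORE approach, assuming the two results have been run into two answer sets with matching column layouts (the file names here are invented for illustration):

```
TABLE FILE SQLOUT1
PRINT *
MORE
FILE SQLOUT2
END
```

I believe any BY sort fields you add to the first request are applied to the concatenated result, which would take the place of the ORDER BY 1,2,3.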


Access form to run a query and display the results

I have a query in an MS Access database. How can I create a form that has a button to run the query and display the results in the same form, so that it looks more user friendly? (The result is only 5 records from a two-column table.)
I don't need a complete solution, just some advice on the code for the button and the space to display the result.
Based on the answers, I guess I don't understand the question. It sounds like the OP has a DML query (or "action query" in Access terms) that modifies data and wants to display the results in a form. The current answers explain how to display the results, but not how to run the query.
So, here's an answer based on my interpretation of the question.
First, create a continuous or datasheet form that is bound to the results.
That's the easy part. The "hard" part is executing the SQL that does the updates whose results you're going to display. You don't give any context for where you're launching this from, nor how you determine which particular records to update, so I'm going to give two fairly generic answers.
Method 1. create a macro with two parts:
the first command is OpenQuery and you'd supply the name of your saved query as the argument.
the second command is OpenForm that opens the form you created to display the results.
Now, I haven't supplied any method for executing the macro, but that's because you didn't supply any context.
Method 2. on a form from which it is appropriate to initiate this process:
create a command button.
use the OnClick event to perform the desired action.
a. use the macro you wrote with Method 1 as the argument for the OnClick event of the command button.
b. write VBA code to do both tasks:
CurrentDb.Execute "MySavedQueryThatUpdatesData", dbFailOnError
DoCmd.OpenForm "MyFormThatDisplaysTheResults"
But this is all really begging the question, as it's all pretty darned elementary. The hard part of this kind of thing comes when your SQL update operates on a subset of records and you need to display only that subset.
It is very likely that your original query will be keyed to the original context. Say, for instance, that you want to launch the entire process from a form that displays Companies and your SQL operates on the Employees of the currently displayed Company record. In that case, you'd want an update of the Employees table limited to the Company you're currently viewing. There are two ways to do that:
use a reference to the CompanyID in the Company form in the WHERE clause of your saved QueryDef:
UPDATE Employees
SET [blah, blah, blah]
WHERE Employees.CompanyID = Forms!Company!CompanyID
instead of using a saved QueryDef hardwired to require that your Company form be open for it to work, write the SQL on the fly in the code behind your command button:
Dim strSQL As String
strSQL = "UPDATE Employees "
strSQL = strSQL & "SET [blah, blah, blah] "
strSQL = strSQL & "WHERE Employees.CompanyID = "
strSQL = strSQL & Me!CompanyID
CurrentDb.Execute strSQL, dbFailOnError
Now, for the second part of it, you need to open the results form to display just those records that have been updated. That means you want the form opened with the same WHERE clause as was used for the update. There are two methods for this, too.
the first is very much like the first method for performing the update, i.e., hardwiring the reference to the Company form in the WHERE clause of your results form's Recordsource. So, the Recordsource for your results form would look like this:
SELECT Employees.*
FROM Employees
WHERE Employees.CompanyID = Forms!Company!CompanyID
Then you'd open the results form the same way as originally stated:
DoCmd.OpenForm "MyFormThatDisplaysTheResults"
the second approach avoids hardwiring the Recordsource of your results form to require the Company form be open, and instead, you just supply the WHERE clause (without the WHERE keyword) in the appropriate parameter of the OpenForm command:
DoCmd.OpenForm "MyFormThatDisplaysTheResults", , , "[CompanyID] = " & Me!CompanyID
Learning to do this is one of the most powerful and easy aspects of using Access: you can create a form that returns all the records in a table, and then open that form to display subsets of data by supplying the appropriate WHERE parameter in the OpenForm command. Keep in mind that Access applies these very efficiently; it doesn't open the form, load the entire recordset, and then apply the WHERE argument to it, but applies the WHERE parameter to the recordsource before any records are loaded into the form.
Now, a consideration of what is the best way out of all the alternatives:
I would write the SQL on the fly for the update and use the WHERE parameter of the OpenForm command to do the filtering. So the code behind the OnClick event of the command button on the Company form would look like this:
Dim strSQL As String
strSQL = "UPDATE Employees "
strSQL = strSQL & "SET [blah, blah, blah] "
strSQL = strSQL & "WHERE Employees.CompanyID = "
strSQL = strSQL & Me!CompanyID
CurrentDb.Execute strSQL, dbFailOnError
DoCmd.OpenForm "MyFormThatDisplaysTheResults", , , "[CompanyID] = " & Me!CompanyID
Now, because of the dbFailOnError argument to Execute, you'd need an error handler. And if you want to know how many records were changed, you'd need to hold the database in an object variable rather than calling CurrentDb directly (each call to CurrentDb returns a fresh reference, so CurrentDb.RecordsAffected would not reflect your Execute). So more likely, I'd do it like this:
On Error GoTo errHandler
Dim strSQL As String
Dim db As DAO.Database
strSQL = "UPDATE Employees "
strSQL = strSQL & "SET [blah, blah, blah] "
strSQL = strSQL & "WHERE Employees.CompanyID = "
strSQL = strSQL & Me!CompanyID
Set db = CurrentDb
db.Execute strSQL, dbFailOnError
Debug.Print "Updated " & db.RecordsAffected & " Employee records."
DoCmd.OpenForm "MyFormThatDisplaysTheResults", , , "[CompanyID] = " & Me!CompanyID

exitRoutine:
Set db = Nothing
Exit Sub

errHandler:
MsgBox Err.Number & ": " & Err.Description, _
vbExclamation, "Error in Forms!Company!cmdMyButton.OnClick()"
Resume exitRoutine
My reason for constructing the SQL on the fly in the command button's OnClick event is that it's very easy to add more criteria should they become necessary. I like to avoid overloading my saved QueryDefs with dependencies on UI objects, so I tend to write SQL like this on the fly in the place where it is used.
Some people worry that this degrades performance because on-the-fly SQL is not optimized by your database engine's query optimizer. This may or may not be true. Many server database engines cache optimization plans of on-the-fly SQL commands, and because of the way Jet/ACE parses a SQL command like this and hands it off to the server, it is likely to be sent as a generic stored procedure. Because of that, a server like SQL Server will cache that query plan and be able to re-use it each time you execute the on-the-fly SQL, even if each time it has a different CompanyID value.
With a Jet/ACE back end there is no such caching, but the difference in execution time between the optimized and unoptimized SQL is going to be very small in all cases where you're not operating on really large datasets. And even updating, say, 1000 employee records is not something that counts as a large dataset for Jet/ACE. So I think there is seldom enough of a performance hit from writing SQL on the fly to justify moving it to a saved QueryDef. However, on a case-by-case basis, I might very well choose to do so; it would just not be my first choice.
The more significant objection, though, is that you'll have a bunch of SQL strings littered throughout your code, and this can become a maintenance nightmare. I don't know what to say about that, except that there are ways to eliminate as much duplication as possible: either save a base SELECT query as a saved QueryDef and construct in code only the parts specific to the action being taken in that particular case, or use defined constants in your code to hold the base SQL statements (so that you only have to change the definition of the constant to change the results everywhere it is used).
That's fairly weak, but with Access, I don't see any alternative. If you save every SQL statement as a QueryDef you end up with a different kind of unmanageable mess with too many saved queries, each slightly different from the other, and it can be just as duplicative as SQL repeated in code.
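For concreteness, the defined-constant idea might look something like this (all names here are invented for illustration):

```
' Base statement kept in one place; edit it here and every caller picks it up.
Private Const cBaseEmployeeSQL As String = _
    "SELECT Employees.* FROM Employees"

Private Sub cmdShowEmployees_Click()
    Dim strSQL As String
    ' Append only the criteria specific to this context.
    strSQL = cBaseEmployeeSQL & " WHERE CompanyID = " & Me!CompanyID
    Me!subEmployees.Form.RecordSource = strSQL
End Sub
```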
But that's another issue, and I probably shouldn't make this any longer by trying to resolve it here!
I suggest using a subform with a continuous form to display the results. I assume the query is a select query of some description, so the record source of the subform can be set to the SQL string:
Dim strSQL As String
strSQL = "SELECT ID, Description, Count(SomeVal) " _
& "FROM Table " _
& "WHERE SomeVal=" & Me.txtSomeVal _
& " GROUP BY ID, Description"
Me.[Subform Control Name].Form.RecordSource = strSQL

VB6 ADO Set rs=command.Execute

Set rs = command.Execute
If Not rs.EOF Then
    'logic here
End If
The above code fails at line 2, because rs is closed (probably the OLE DB provider decided, wrongly, that it is an insert command and so there is no need to return anything). The command is an INSERT statement, something like:
Insert into log(Name,Value) values('xxx', 123); select scope_identity()
I need the identity back from the server in one roundtrip. Any ideas?
PS: Updated insert statement with field names(thanks Eduardo), but that is not the problem.
Your problem is that the data source is returning two recordsets. This is due to the NOCOUNT setting of the database: the first recordset is used just to report the number of records affected by the INSERT statement, and the second recordset returns the result of the SELECT SCOPE_IDENTITY() call. So you need to do:
If rs.State = adStateClosed Then
Set rs = rs.NextRecordset()
End If
Then you can check EOF and all that.
But I usually avoid all this and put stuff like this in a stored procedure. Then I SET NOCOUNT ON in the stored procedure and return the id at the end of it. Right now your insert might be simple, but the logic could grow. By putting it in a stored procedure you can just change the stored procedure and not have to change your compiled VB app. It also isolates the database code a bit.
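A sketch of such a stored procedure (the procedure and parameter names are invented; the table and column names come from the question, and the parameter types are assumptions):

```
CREATE PROCEDURE dbo.InsertLog
    @Name varchar(50),
    @Value int
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the rows-affected recordset
    INSERT INTO log(Name, Value) VALUES (@Name, @Value);
    SELECT SCOPE_IDENTITY() AS NewId;  -- single recordset back to ADO
END
```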
What you don't want to do is SET NOCOUNT ON in your statement there. That I think affects the whole connection if not the whole database.
SELECT @@IDENTITY should give the last inserted id
Is this SQL Server? Try adding SET NOCOUNT ON at the start of your SQL, e.g.
SET NOCOUNT ON;
Insert into log(Name,Value) values('xxx', 123);
select scope_identity()
You could try returning the ID from a stored procedure (a scalar function can't perform an INSERT in SQL Server):
CREATE PROCEDURE dbo.MyProc (@name varchar(10), @value int)
AS
BEGIN
DECLARE @rtn int
INSERT INTO log(Name, Value)
VALUES (@name, @value);
SELECT @rtn = SCOPE_IDENTITY()
RETURN @rtn;
END
Although I think what you have above, where you do an insert and then a select should work in some way.

Pentaho PDI get SQL SUM() with conditions

I'm using Pentaho PDI 7.1. I'm trying to convert data from MySQL to MySQL while changing the structure of the data.
I'm reading the source table (customers), and for each row I have to run another query to calculate the balance.
I was trying to use a Database value lookup step to accomplish this, but maybe it is not the best way.
I have to run a query like this to get the balance:
SELECT SUM(CASE WHEN direzione='ENTRATA' THEN -importo ELSE importo END)
FROM Movimento
WHERE contoFidelizzato_id = ?
I should set the parameter taking it from the previous step. Any advice?
The Database value lookup may be a good idea, especially if you are used to database reasoning, but it issues one query per incoming row, which may not be the most efficient.
A more PDI-ish style would be to make the query like:
SELECT contoFidelizzato_id
, SUM(CASE WHEN direzione='ENTRATA' THEN -importo ELSE +importo END)
FROM Movimento
GROUP BY contoFidelizzato_id
and use it as the lookup source of a Stream lookup step.
An even more PDI-ish style would be to split the source flow (customers) into two branches: one in which you keep the source rows, and one that you group by contoFidelizzato_id. Of course, you then need a Formula step, a JavaScript step, or an expression in the SQL of the Table input to flip the sign when needed.
Test to find out which strategy is better in your case. You'll soon discover that PDI is very good at handling large data.

why the execute sql script is not working in kettle

I am using Kettle to get data from one table (t1), joining in an Execute SQL script step (t2), and then doing an insert/update into the same table (t1).
Here's my transform
table input tool
select stud_id,mark from student;
execute sql Script
select s.stud_id, ifnull(m.mark,0) as mark from mark as m inner join student as s on (s.stud_id=m.stud_id) where s.stud_id='?'
fields: stud_id
insert/update tool
table: student
check: stud_id=stud_id
update: mark=mark
When I run this, it completes successfully but the values are not inserted or updated in the target table.
If the Execute SQL script step is the one I think you're referring to, it doesn't generate output rows. It's for forming up a bunch of SQL statements in your transform and running them individually. I'm not in front of PDI right now, but I believe the way to run a dynamic SQL statement and add its output to your data flow is the Execute Dynamic SQL step.
Nevertheless, in your case I would use a Database join step instead. This step prepares the statement once and just re-executes the query plan for each row that arrives in the transform, substituting the row's data into the parameter markers. Much more performance friendly.
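A sketch of the query you might put in the Database join step for this case (table and column names come from the question; the ? is bound from the incoming stud_id field, so no quotes are needed):

```
SELECT IFNULL(m.mark, 0) AS mark
FROM mark AS m
WHERE m.stud_id = ?
```

With stud_id listed as the parameter field, the returned mark is appended to each row and can feed your insert/update step.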

How to cast a Javascript String to SQL LongText

I have been facing a problem for an hour or so with a LongText column in SQL.
I am currently working on a web project with a search form that allows the user to search through numerous fields. Because the customer's users are forced to use IE 9, we had to create a JavaScript script that parses the whole form and only takes what's needed, in order not to exceed the 2k-character limit IE has.
So far so good; the script was working fine until we added a textarea field which maps to a LongText column in our DB. The problem seems to be that SQL won't compare a plain string to a LongText (ntext) column in an SQL statement.
Here is what the query looks like:
and CMF_TYPE_CASE_FK like '%LC_130125_074927_000001_60%'
But I get this error:
EJBException:; nested exception is: javax.ejb.EJBException: The data types ntext
and varchar are incompatible in the equal to operator.S_EXCP(204); nested
exception is: javax.ejb.EJBException: The data types ntext and varchar
are incompatible in the equal to operator.S_EXCP(204)
We are using CASE360, which is not so well known, but that doesn't matter much: we build the WHERE clause of the SQL statement in JavaScript and then send it to the CASE360 processor, which executes the query. That part works fine; it's just the LongText column that is giving me a hard time.
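For the form-parsing side, the clause-building logic described above might look something like this minimal sketch (the function name, field names, and the quote-escaping are illustrative assumptions, not the project's actual code; real code should still validate input server-side):

```javascript
// Build a WHERE-clause fragment from only the filled-in form fields,
// so empty inputs don't bloat the query string past IE's limit.
function buildWhereFragment(fields) {
  var parts = [];
  for (var name in fields) {
    var value = fields[name];
    if (value !== null && value !== undefined && String(value) !== "") {
      // Double up single quotes so the value is a valid SQL string literal.
      var escaped = String(value).replace(/'/g, "''");
      parts.push("and " + name + " like '%" + escaped + "%'");
    }
  }
  return parts.join(" ");
}
```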
If you guys have any idea what this SQL Query should look like to be successfully interpreted then I'd be infinitely happy!
Thank you for your help in advance.
You can try using a CAST on the column which is ntext. It's not clear to me which column that would be from the error above. As an example, let's assume it's the CPR_DESCRIPTION column:
and CAST(CPR_DESCRIPTION as varchar(2000)) ='rererere'
and CMF_TYPE_CASE_FK like '%LC_130125_074927_000001_60%'
Keep in mind that you need to pick an appropriately large varchar size for the cast, and that even with the maximum size there is a potential for data loss.
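If the server is SQL Server 2005 or later, casting to nvarchar(max) is one way to sidestep picking a fixed size (still assuming CPR_DESCRIPTION is the ntext column, as in the example above):

```
and CAST(CPR_DESCRIPTION as nvarchar(max)) = 'rererere'
and CMF_TYPE_CASE_FK like '%LC_130125_074927_000001_60%'
```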