Question
I get the following error when trying to use the SQL Bulk Load object:
"Error connecting to the data source." The data source is used right above this code to successfully read from the database. It is the last line that blows up.
Here is the problem code:
connStr = "provider=SQLNCLI;Data Source=myserver;Initial Catalog=mydb;Integrated Security=True"
Dim objBL As New SQLXMLBULKLOADLib.SQLXMLBulkLoad
objBL.ConnectionString = connStr
objBL.BulkLoad = True
objBL.XMLFragment = True
objBL.KeepIdentity = False
objBL.ErrorLogFile = "C:\BulkLoadErrors.xml"
objBL.Execute(SchemaFile, datafile)
(SchemaFile and datafile are strings containing the full file name and path)
Answers
Use the following connection string:
ConnStr = "provider=sqloledb;data source=myserver;database=mydb;integrated security=SSPI;"
It should work.
Thanks.
Naras.
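As a side note (my own sketch, not part of the answer above): the likely reason the original string fails is that OLE DB providers such as SQLOLEDB and SQLNCLI expect `Integrated Security=SSPI`, while `True` is an ADO.NET-style value that the OLE DB layer rejects. A small Python helper, with hypothetical function names, can illustrate the check:

```python
# Minimal sketch: split an OLE DB connection string into key/value pairs
# and flag the ADO.NET-style "Integrated Security=True", which OLE DB
# providers do not accept (they want SSPI).

def parse_conn_str(conn_str):
    """Split an OLE DB connection string into a {key: value} dict."""
    pairs = (p for p in conn_str.split(";") if p.strip())
    return {k.strip().lower(): v.strip() for k, v in (p.split("=", 1) for p in pairs)}

def check_integrated_security(conn_str):
    """Return a warning string if Integrated Security is not SSPI, else None."""
    props = parse_conn_str(conn_str)
    value = props.get("integrated security")
    if value is not None and value.upper() != "SSPI":
        return f"Integrated Security={value!r} is not valid for OLE DB; use SSPI"
    return None

bad = "provider=SQLNCLI;Data Source=myserver;Initial Catalog=mydb;Integrated Security=True"
good = "provider=sqloledb;data source=myserver;database=mydb;integrated security=SSPI;"
print(check_integrated_security(bad))   # prints a warning about True
print(check_integrated_security(good))  # None
```

The same check applies to the SQLNCLI string from the question: swapping `True` for `SSPI` (or switching to the SQLOLEDB string above) is the usual fix.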
Question
Hi,
I'm trying to test a SQL XML Bulk Load routine with SQL Server 2008 R2 installed on Windows Server 2008 R2. I'm receiving the error below when running my script.
The code in my vbs script is as follows (with personal details altered).
Set objBL = CreateObject("SQLXMLBulkLoad.SQLXMLBulkLoad.4.0")
objBL.ConnectionString = "provider=SQLOLEDB.1;data source=servername\instance;database=MyDatabase;User id=domain\username;Password=PASSWORD"
objBL.ErrorLogFile = "c:\error.log"
objBL.Execute "c:\customermapping.xml", "c:\customers.xml"
Set objBL = Nothing
I thought I had the connection string right; could anybody comment on why I might be receiving this error?
Many Thanks,
James
James Bratley
Answers
"User ID" and "Password" are intended for SQL login authentication, not for Windows authentication.
Here's the proper connection string for Windows authentication:
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=YourDB;Data Source=servername\instance
Sebastian Sajaroff Senior DBA Pharmacies Jean Coutu
Marked as answer by
Friday, December 7, 2012 4:04 PM
I am trying to connect to a data source but am getting this error:
An error occurred during local report processing.
An error has occurred during report processing.
Cannot create a connection to data source 'PO'.
You have specified integrated security or credentials in the connection string for the data source, but the data source is configured to use a different credential type. To use the values in the connection string, you must configure the unattended report processing account for the report server.
Any idea how to resolve this?
Thanks
Aruna
asked Jan 13, 2017 at 11:05
Looks like it has forgotten your password.
If it is an embedded data source: right-click on the data source, then go to Properties > Credentials and re-enter the credentials.
If it is a shared data source: go to Shared Data Sources, right-click on the data source and hit Open; once in there, click on Credentials and enter the credentials again.
answered Jan 13, 2017 at 13:54
I was having this same issue. For me, the problem was that I was using a DNS alias as the server name in the connection string. Once I changed that to the actual machine name my connection was solid.
answered Dec 19, 2019 at 15:17
I guess this issue occurs in Visual Studio when you connect to the data source but are not able to create a report using it. What credentials did you use when creating the database: Windows authentication, or a username and password? Right-click on the data source in VS and, in Properties, select the same credential type as your server/db. Mine is Windows auth for my server/db and I selected the same for the data source, so now I am able to create reports.
answered Feb 2, 2022 at 19:53
18.05.12 — 09:10
I've never dealt with this kind of problem before.
I need to load data from an XML file into a database table on SQL Server 2008. I wrote a procedure. I get the error "Error connecting to the data source" on the last line.
SQLXMLBULKLOADLib.SQLXMLBulkLoad4Class bl =
new SQLXMLBULKLOADLib.SQLXMLBulkLoad4Class();
bl.ConnectionString = @"Provider=SQLOLEDB; Data Source=АА-SQL; Initial Catalog=DVP; User ID=РРРР\хххх; Password=456456456";
bl.SchemaGen = true;
bl.SGDropTables = true;
bl.KeepNulls = true;
bl.Transaction = true;
//bl.ErrorLogFile = @"R:\апап\вава\укук\ук\XMLDocForBulkLoad.err";
Object vDataFile = @"R:\апап\вава\укук\ук\AS_SOCRBASE_20120307_c6125d29-dbfe-49bb-bb19-3c7f58a6589a.xml";
bl.Execute(@"R:\апап\вава\укук\ук\AS_SOCRBASE_2_250_06_04_01_01.xsd", vDataFile);
What did I do wrong?
1 — 18.05.12 — 09:16
>> SQLXMLBULKLOADLib.SQLXMLBulkLoad4Class
>> Gender: female
O_O
On topic: convert it to delimited text and then:
BULK INSERT [BaseName].[dbo].[TableName]
FROM 'filename.txt'
WITH (DATAFILETYPE='char', FIELDTERMINATOR='<field separator character>')
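The "convert to delimited text" advice above can be sketched in Python. For multi-gigabyte XML files (as in this thread) a streaming parser is needed so the whole file is never held in memory. The element name `Object` and the attribute list below are made-up placeholders, not the real FIAS schema:

```python
# Streaming XML -> tab-delimited text, suitable for loading with BULK INSERT.
# iterparse processes a 4 GB file record by record instead of loading it whole.
import csv
import xml.etree.ElementTree as ET

def xml_to_delimited(xml_path, out_path, record_tag, fields):
    """Write one delimited row per <record_tag> element, taking values
    from the element's attributes listed in `fields` (missing ones -> "")."""
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out, delimiter="\t")
        for event, elem in ET.iterparse(xml_path, events=("end",)):
            if elem.tag == record_tag:
                writer.writerow(elem.get(f, "") for f in fields)
                elem.clear()  # free memory: drop the processed subtree

# Hypothetical usage (file and attribute names are placeholders):
# xml_to_delimited("AS_SOCRBASE.xml", "socrbase.txt", "Object",
#                  ["LEVEL", "SOCRNAME", "SCNAME"])
```

The resulting file could then be loaded with the BULK INSERT statement shown above, using FIELDTERMINATOR='\t'.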
2 — 18.05.12 — 09:32
(1) I didn't understand that O_O.
How do I convert it?
3 — 18.05.12 — 09:36
(2) How are you loading from the XML?
> How do I convert it?
Excel can do it, I think.
4 — 18.05.12 — 09:41
(3) The XML files come from the internet; there are a lot of them, some up to 4 GB. Excel can't open files that big.
5 — 18.05.12 — 09:48
What kind of madness is this, XML files of 4 GB? Or are you indexing and exporting the whole internet over there?
6 — 18.05.12 — 09:49
(5) It's FIAS (the Russian federal address database); it's not our doing.
7 — 18.05.12 — 09:56
8 — 18.05.12 — 11:17
I've beaten the error from (0); now there is a different one: Schema: unable to load schema 'AS_SOCRBASE_2_250_06_04_01_01.xsd'.
But I didn't invent the schema; it should be correct, since it was published on the official FIAS site.
9 — 18.05.12 — 12:39
The topic is still relevant.
10 — 18.05.12 — 12:42
11 — 18.05.12 — 13:20
(10) Aha, so for SQLXMLBULKLOADLib.SQLXMLBulkLoad4Class to work you need a schema made specifically for SQLXMLBulkLoad, and to produce such a schema you first have to generate it from a table in a SQL Server database, after first putting into that table the very data I only have as XML files. ((( What a trap.
12 — 18.05.12 — 13:21
(11) Exactly so. Or take a single row with the field names, create it in Excel, import it into the database with table creation, and then export the schema.
13 — 18.05.12 — 13:29
(12) I'll try it, thanks. The only thing is that not all of my XML files will even open.
14 — 21.05.12 — 16:13
Well, after some tweaking of the schema I managed to load a small file with SQLXMLBulkLoad4Class.
15 — 21.05.12 — 16:17
declare @doc XML
declare @idoc int
select @doc= (SELECT top 1 BulkColumn FROM OPENROWSET(BULK '{filename.xml}',SINGLE_BLOB) AS x)
exec sp_xml_preparedocument @idoc OUTPUT,@doc
SELECT * FROM OPENXML (@idoc,'{XPath}') WITH( {field} {type} {XPath})
16 — 21.05.12 — 16:22
17 — 21.05.12 — 16:23
18 — 21.05.12 — 17:37
Bump.
19 — 22.05.12 — 01:38
(0) Author, did (16) and (17) help you, or are you going to keep messing around?
Trisha
20 — 22.05.12 — 08:39
(19) I haven't tried them yet. But why so categorical, "messing around"? I'm exploring different options. It did work with SQLXMLBulkLoad4Class, after all. Now I can look at other approaches and compare.
What should I do?
I just updated promtail-local-config.yaml and restarted the Promtail service, and now it doesn't work normally. Before the update it worked fine.
Hi, I had a similar issue. It appears to be the latest image tag, i.e. image: master, which always seems to break the Loki datasource.
Image:
tag: master-ffe1093
Last update: 3 days ago, and it works.
I have the same issue; nothing helps. I pulled the latest loki and promtail images.
In the last messages from Promtail there is not a word about an established connection or a failure:
promtail_1 | level=info ts=2019-02-09T17:27:45.378030372Z caller=main.go:47 msg="Starting Promtail" version="(version=master-58d2d21, branch=master, revision=58d2d21)"
promtail_1 | level=info ts=2019-02-09T17:27:50.37733498Z caller=filetargetmanager.go:165 msg="Adding target" key="{job=\"varlogs\"}"
promtail_1 | level=info ts=2019-02-09T17:27:50.377931998Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/backupninja.log
promtail_1 | level=info ts=2019-02-09T17:27:50.3782893Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/boot.log
promtail_1 | level=info ts=2019-02-09T17:27:50.378583606Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/maillog
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/backupninja.log - &{Offset:18304 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/lastlog - &{Offset:0 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/boot.log - &{Offset:95 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/tallylog - &{Offset:0 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/maillog - &{Offset:0 Whence:0}
promtail_1 | level=info ts=2019-02-09T17:27:50.379321674Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/tallylog
promtail_1 | level=info ts=2019-02-09T17:27:50.381192173Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/lastlog
promtail_1 | level=info ts=2019-02-09T17:27:50.381746355Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/test.log
promtail_1 | level=info ts=2019-02-09T17:27:50.381849633Z caller=filetarget.go:269 msg="start tailing file" path=/var/log/yum.log
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/yum.log - &{Offset:22494 Whence:0}
promtail_1 | 2019/02/09 17:27:50 Seeked /var/log/test.log - &{Offset:5 Whence:0}
@wilful, please share the Loki log. The Promtail part seems normal.
No message is displayed on the Loki side when Promtail starts.
loki_1 | level=info ts=2019-02-11T06:03:35.982717042Z caller=loki.go:122 msg=initialising module=server
loki_1 | level=info ts=2019-02-11T06:03:35.983179243Z caller=gokit.go:36 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
loki_1 | level=info ts=2019-02-11T06:03:35.983718039Z caller=loki.go:122 msg=initialising module=overrides
loki_1 | level=info ts=2019-02-11T06:03:35.983750208Z caller=override.go:33 msg="per-tenant overides disabled"
loki_1 | level=info ts=2019-02-11T06:03:35.983787937Z caller=loki.go:122 msg=initialising module=store
loki_1 | level=info ts=2019-02-11T06:03:35.985754972Z caller=loki.go:122 msg=initialising module=ingester
loki_1 | level=info ts=2019-02-11T06:03:35.987309309Z caller=lifecycler.go:358 msg="entry not found in ring, adding with no tokens"
loki_1 | level=info ts=2019-02-11T06:03:35.987747435Z caller=lifecycler.go:288 msg="auto-joining cluster after timeout"
loki_1 | level=info ts=2019-02-11T06:03:36.004831721Z caller=loki.go:122 msg=initialising module=ring
loki_1 | level=info ts=2019-02-11T06:03:36.005003778Z caller=loki.go:122 msg=initialising module=querier
loki_1 | level=info ts=2019-02-11T06:03:36.005738165Z caller=loki.go:122 msg=initialising module=distributor
loki_1 | level=info ts=2019-02-11T06:03:36.005817828Z caller=loki.go:122 msg=initialising module=all
loki_1 | level=info ts=2019-02-11T06:03:36.005849317Z caller=main.go:45 msg="Starting Loki" version="(version=master-58d2d21, branch=master, revision=58d2d21)"
docker-compose exec promtail sh -c 'cat /etc/promtail/docker-config.yaml'
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
client:
url: http://loki:3100/api/prom/push
scrape_configs:
- job_name: system
entry_parser: raw
static_configs:
- targets:
- localhost
labels:
job: varlogs
__path__: /var/log/*log
docker-compose exec promtail sh -c 'ping loki'
PING loki (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.129 ms
I'm having the same issue.
here is my complete setup: https://github.com/zajca/docker-server-explore
The loki IP resolves fine (docker-compose -f ... exec grafana sh -c 'getent hosts loki'), and the same goes for the loki IP from promtail. Yet I'm getting the error Data source connected, but no labels received. Verify that Loki and Promtail is configured properly.
If I access the loki IP on port 3100 at the /metrics URL, the data is there.
Hello,
I have this problem too
I started my prom + loki + grafana stack and for some time everything is fine (about 15-20 minutes).
After that my "log lines" stop refreshing and events are duplicated.
After 20-30 minutes I get this error:
Error connecting to datasource: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly.
After I restart the prom container, everything is fine again.
After 2 weeks of work, the problem returned. No idea what happened. Update: nothing in the logs.
Hi all.
You must mount the volume with the logs described in /etc/promtail/docker-config.yaml:
job: varlogs
__path__: /var/log/*log
like this:
promtail:
image: grafana/promtail:master
container_name: promtail
volumes:
- /var/log:/var/log:ro
Check it!
I seem to be hitting the same problem. To me it seems that if nothing has been logged for some time (maybe about 20 minutes) then all the logs disappear: Grafana shows the "no labels received" error, and API calls such as GET /api/prom/label return nothing when there was data a few minutes earlier.
I am sending logs to Loki using both Promtail (installed directly as a binary in virtual machines), and direct push (POST /api/prom/push) from some applications.
Edit: I should add that if new log lines get pushed, all the logs reappear, including what was sent previously, so the data isn't actually lost.
Ref to #430
Looks like only the label data was lost.
This error also happened with Loki Cloud. (UserID: 2315)
Your first stop when you see this issue is the troubleshooting guide.
If you're testing things on your laptop and restart loki or promtail often, you'll run into these situations: your low-volume logs were already consumed before loki was ready to receive them (start promtail a bit after loki), promtail has already pushed all the logs (delete the positions file to force a new push), or loki did not have time to flush what it indexed before you restarted it (push the logs again, probably by deleting the positions file). We're still working on making this single-binary use case a bit smoother.
It's worth noting that these issues won't affect production use of Loki: once it's running and replicated it can handle restarts without data loss. And if your apps keep producing logs, there will be labels.
Closing this issue; it seems to be related to some instability in the earlier versions of promtail/loki (which should hopefully be gone now) and misconfiguration (for which there should hopefully be better docs and support now).
I’m still seeing this type of issue on a fresh install (using loki-stack chart v0.16.5). Sometimes the labels disappear, sometimes the logs disappear as well. Seems sporadic, didn’t see anything in the logs to explain it.
still seeing the same issue with loki/promtail v0.3.0
same issue here #1173
Is there any config option that makes Loki keep labels for longer?
Seeing issue also in loki/promtail v0.4.0
On Friday I got this:
curl -G -s "http://someserver:3100/loki/api/v1/query" --data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)'
{"status":"success","data":{"resultType":"vector","result":[{"metric":{},"value":[1572013966.797,"48.016666666666666"]}]}}
On Monday I got this:
curl -G -s "http://someserver:3100/loki/api/v1/query" --data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)'
{"status":"success","data":{"resultType":"vector","result":[]}}
Grafana says: "Error connecting to datasource: Data source connected, but no labels received. Verify that Loki and Promtail is configured properly."
No restarts happened.
Service config from compose file:
loki:
image: grafana/loki:v0.4.0
volumes:
- ./config/loki/local-config.yaml:/etc/loki/local-config.yaml
- ./data/loki:/tmp/loki/
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
promtail:
image: grafana/promtail:v0.4.0
volumes:
- /var/log:/var/log
command: -config.file=/etc/promtail/docker-config.yaml
loki local-config.yaml:
auth_enabled: false
server:
http_listen_port: 3100
ingester:
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 5m
chunk_retain_period: 30s
max_transfer_retries: 1
schema_config:
configs:
- from: 2018-04-15
store: boltdb
object_store: filesystem
schema: v9
index:
prefix: index_
period: 168h
storage_config:
boltdb:
directory: /tmp/loki/index
filesystem:
directory: /tmp/loki/chunks
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
chunk_store_config:
max_look_back_period: 0
table_manager:
chunk_tables_provisioning:
inactive_read_throughput: 0
inactive_write_throughput: 0
provisioned_read_throughput: 0
provisioned_write_throughput: 0
index_tables_provisioning:
inactive_read_throughput: 0
inactive_write_throughput: 0
provisioned_read_throughput: 0
provisioned_write_throughput: 0
retention_deletes_enabled: false
retention_period: 0
When that happens, can you use logcli and query for recent logs? It looks like a problem with docker compose or the local config; let's move the discussion to the other issue #1173. Please give as many details as possible: logs, config, how you send logs, etc.
I seem to be hitting the same problem. To me it seems that if nothing has been logged for some time (maybe about 20 minutes) then all the logs disappear: Grafana shows the "no labels received" error, and API calls such as GET /api/prom/label return nothing when there was data a few minutes earlier.
@leops I have the same problem with the latest docker images of loki and promtail on a swarm.
curl -G -s "http://<host>:3100/loki/api/v1/label" | jq .
Returns {} when there have not been any new logs for some time and the correct labels if there are new logs (for 5 minutes or so).
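A small health check along these lines (my own sketch, not from the thread) can watch the label endpoint mentioned above and report when the label set goes empty. The fetch function is injected so the logic can be exercised without a running Loki; the endpoint path matches the curl call above:

```python
# Sketch of a "labels disappeared" check against Loki's /loki/api/v1/label
# endpoint. fetch() is injected so the check can be tested without a live
# Loki; in real use pass the urllib-based default_fetch.
import json
from urllib.request import urlopen

def default_fetch(url):
    """Fetch a URL and return the response body as text."""
    with urlopen(url, timeout=5) as resp:
        return resp.read().decode("utf-8")

def labels_present(base_url, fetch=default_fetch):
    """Return the list of label names, or [] if Loki reports none
    (the symptom discussed in this thread)."""
    body = fetch(base_url.rstrip("/") + "/loki/api/v1/label")
    payload = json.loads(body)
    # Loki returns {"status": "success", "data": [...]} when labels exist,
    # and an empty object / empty data list once they have expired.
    return payload.get("data") or []

# Example with a stubbed fetch:
stub = lambda url: '{"status":"success","data":["job","level"]}'
print(labels_present("http://loki:3100", fetch=stub))  # ['job', 'level']
```

In real use you would pass `default_fetch` (the default) and the base URL of your Loki instance, and alert when the returned list is empty.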
I'm hitting the same problem with loki+promtail v1.6.1. After replacing it with loki+promtail v1.6.0, the problem disappeared.
I use Grafana v7.2.1 (72a6c64532)
For posterity: what fixed it for me was making sure I had tenant_id set (here tenant1) in the promtail config file:
clients:
- url: http://REDACTED:3100/loki/api/v1/push
tenant_id: tenant1
Error connecting to datasource
Hello,
We were using dbeaver without any problems with our db2 z/os database.
Our DBA has migrated the database and now we get an error message when trying to connect to the database (originally in French, translated here):
SQL Error: [ibm][db2][jcc][10174][10603] The load module name for the stored procedure, {0}, could not be found on the server. Contact your DBA.
SQL Error [42724]: DB2 SQL error: SQLCODE: -444, SQLSTATE: 42724, SQLERRMC: DSNATBLS
[ibm][db2][jcc][10174][10603] The load module name for the stored procedure, {0}, could not be found on the server. Contact your DBA.
We cannot execute any query. I think some metadata is loaded on startup (tablespaces?) that we no longer have access to, but DBeaver should connect anyway and allow executing queries.
With squirrelSQL, it is working fine.
Regards,
Brad
- bradNull
- Posts: 8
- Joined: Tue Mar 04, 2014 12:51 pm
Re: Error connecting to datasource
by Serge » Wed Oct 01, 2014 7:38 am
Hello,
Please send complete error stacktrace. You can copy it from Error Log view (main menu Window -> Error Log).
Regards
- Serge
- Posts: 1526
- Joined: Sat Feb 26, 2011 8:24 pm
- Location: SPb
Re: Error connecting to datasource
by Serge » Wed Oct 01, 2014 9:29 am
What DBeaver version do you use? It doesn't look like a recent one.
Did you try version 3.0.1?
Re: Error connecting to datasource
by bradNull » Wed Oct 01, 2014 11:26 am
Yes, I am using 3.0.1.
Re: Error connecting to datasource
by titou10 » Wed Oct 01, 2014 11:32 am
- titou10
- Posts: 37
- Joined: Fri Aug 30, 2013 1:52 am
Re: Error connecting to datasource
by bradNull » Wed Oct 01, 2014 4:17 pm
We have DB2 v10.1; the JDBC driver has been updated as well.
I think it is a bug in the method at com.ibm.db2.jcc.am.DatabaseMetaData.getTables(DatabaseMetaData.java:6238)
DBeaver tries to load the list of tables with this method.
I think DBeaver should allow connecting (in a degraded mode) and allow use of the SQL worksheet even if the table list is empty in the left panel.
As of today, this blocks the user.
Regards,
Re: Error connecting to datasource
by titou10 » Fri Oct 03, 2014 11:46 am
OK
Did you open a PMR with IBM?
Also, did you try the JDBC driver you used previously, to check whether the problem comes from the latest JDBC driver?
Denis
Re: Error connecting to datasource
by bradNull » Mon Oct 06, 2014 4:26 pm
Hello,
Thanks for your reply.
After upgrading to the latest driver, we get a new error message (translated from French):
SQL Error [42724]: USER PROGRAM DSNATBLS COULD NOT BE FOUND. SQLCODE=-444, SQLSTATE=42724, DRIVER=4.18.60
[jcc][t4][10174][10603][4.18.60] The load module name for the stored procedure, SYSIBM.SQLTABLES, could not be found on the server. Contact your DBA. ERRORCODE=-4472, SQLSTATE=null
This error means that the catalog objects named like SYSIBM.SQLXXX in DB2 10.1 are not accessible (that part of the catalog is exposed through stored procedures; I don't know why).
Something has to be done in our database, but I don't know what; the DBA is looking into it. I tried IBM Data Studio and it works fine.
I am not using DBeaver anymore because I cannot connect.
I am repeating myself, but I think DBeaver should allow running SQL queries even when this kind of error occurs while loading the catalog (perhaps that is too complicated).
For more information on this issue :
https://www-304.ibm.com/support/docview … wg21449630
Regards,
Last Modified Date: 24 Aug 2022
Issue
After a published workbook connected to a published data source has changed ownership, the following error may occur when non-admin users open the view:
Unable to connect to the data source.
Try connecting again. If the problem persists, disconnect from the data source and contact the data source owner.
DataServiceFailure
Unable to connect to the server "localhost". Check that the server is running and that you have access privileges to the requested database.
Environment
- Tableau Cloud
- Tableau Server
Resolution
Republish the workbook with "Embedded Password" selected as the authentication method for the published data source.
Cause
Non-admin users who are denied permissions to connect to the published data source can only access the view if the workbook is published with Embedded Password authentication. This allows the viewer to use the publisher’s data source permissions to access the data, instead of their own.
If you change the ownership of a workbook or data source that has embedded credentials, the embedded credentials will be deleted. You can update the embedded credentials by editing the connection information on Tableau Cloud. For more information, see Edit Connections. Alternatively, you can download the workbook or data source, update the embedded credentials for the new owner, and then re-upload the workbook or data source.