Channel: SQL Server Database Engine forum

Only db_owner can read encrypted columns using Decryption by asymmetric key

I set up column-level encryption using asymmetric keys. It works as planned, but when it comes to reading the data, only db_owner can read the encrypted columns; everybody else gets NULL values.
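In case it helps others hitting this symptom: DECRYPTBYASYMKEY (and DECRYPTBYKEY, when the symmetric key is protected by the asymmetric key) returns NULL rather than raising an error when the caller lacks permission on the key, which matches "only db_owner sees plaintext". A minimal sketch; the key and user names are placeholders:

```sql
-- Hedged sketch (MyAsymKey, MySymKey, ReaderUser are hypothetical names).
-- Without permission on the key, decryption silently returns NULL.
GRANT CONTROL ON ASYMMETRIC KEY::MyAsymKey TO ReaderUser;

-- If a symmetric key encrypted by the asymmetric key is used for the data,
-- the reader also needs to be able to open it:
GRANT VIEW DEFINITION ON SYMMETRIC KEY::MySymKey TO ReaderUser;
```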

Jean-Charles


Errors in sys.dm_cdc_errors


The CDC capture job ran into errors on our prod server. The error reported in sys.dm_cdc_errors is:

Cannot insert duplicate key row in object 'cdc.CI_dbo_valuation_CT' 
WITH unique index 'CI_dbo_valuation_t_CT_clustered_idx'. 
The duplicate key value is (0x005e624e00063c60002c, 1, 0x005e624e00063740005c, 2).

The unique index is defined on those key columns. The following SELECT shows that a row with that key was already captured:

SELECT * FROM cdc.CI_dbo_valuation_CT
WHERE [__$start_lsn] = 0x005e624e00063c60002c
  AND [__$command_id] = 1
  AND [__$seqval] = 0x005e624e00063740005c
  AND [__$operation] = 2

My Google search showed that there is a bug in SQL Server 2016 where the capture job fails when a MERGE statement is used with CDC; it was fixed in CU1. https://support.microsoft.com/en-us/help/3155503/fix-merge-statement-to-sync-tables-is-unsuccessful-when-change-data-ca

But we are running SQL Server 2016 CU11 and are still seeing this error. Restarting the CDC capture job cleared it, and the job was able to move past those LSNs. I am trying to find the root cause of this problem.
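For anyone triaging the same symptom, a hedged sketch of the two steps used here: read the failing entries from sys.dm_cdc_errors, then stop and restart the capture job:

```sql
-- Inspect the most recent capture errors (the failing LSNs appear in the message text)
SELECT entry_time, error_number, error_message
FROM sys.dm_cdc_errors
ORDER BY entry_time DESC;

-- Restarting the capture job is what let it move past the LSNs in this case
EXEC sys.sp_cdc_stop_job  @job_type = N'capture';
EXEC sys.sp_cdc_start_job @job_type = N'capture';
```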

Any help would be great. 

SQL DB Migration-Capacity Planning: vCPU Core+Memory Recommendation


Hello Everyone,

  Hope everyone in the SQL community is staying safe and healthy.

I am looking for advice; perhaps someone can suggest the best approach.

Currently I have a shared SQL Server database environment with over 250 databases. The current shared environment has 32 cores and 280 GB RAM.

I need to move 5 databases to a brand-new SQL Server 2017 instance, and for that I need to come up with a core + memory recommendation.

The core count especially matters, to save cost on SQL licensing.

What formula or process should I use to recommend a core count and memory?

Current CPU %

DB    CPU % 

DB1 76.54
DB2 16.36
DB3 16.29
DB4 10.80
DB5 10.00
DB6 8.00

Current Memory usage

Db    Cached Size (GB)
DB1 144
DB2 20
DB3 14
DB4 10
DB5 10
DB6 8
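One rough way to turn these measurements into a starting point (illustrative arithmetic only, not a sizing standard; real sizing should also look at peak sustained CPU over time, especially since DB1 dominates the load):

```sql
-- The five migrating DBs' share of the listed CPU figures, scaled to the 32-core host,
-- and their cached pages plus ~25% headroom for memory. All inputs are from the tables above.
DECLARE @hostCores   int          = 32;
DECLARE @fiveDbCpu   decimal(9,2) = 76.54 + 16.36 + 16.29 + 10.80 + 10.00;         -- 129.99
DECLARE @allDbCpu    decimal(9,2) = 76.54 + 16.36 + 16.29 + 10.80 + 10.00 + 8.00;  -- 137.99
DECLARE @fiveDbCache decimal(9,2) = 144 + 20 + 14 + 10 + 10;                       -- 198 GB

SELECT CEILING(@hostCores * @fiveDbCpu / @allDbCpu) AS StartingCoreCount,
       CEILING(@fiveDbCache * 1.25)                 AS StartingMemoryGB;
```

Because the five databases account for nearly all of the measured CPU share, this method lands close to the full 32 cores; measuring DB1's actual peak utilization with perfmon or the DMVs before committing to a licensed core count would be worthwhile.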

Let me know if you need more information.


Thank you very much for your time and effort to answer this post. Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach Best -Ankit

data masking products


Hi, we are migrating to SQL Server 2019 Enterprise, and we believe Microsoft's data masking feature will be of interest.

Does anyone have experience comparing that feature with, say, Red Gate's or other third-party masking products?

We do not think scrambling will be important, and we do not think encryption will be important, but I am running those questions past our compliance people.
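For comparison purposes, a minimal sketch of SQL Server's built-in Dynamic Data Masking (table, column, and role names are hypothetical). Note that DDM masks values at query time for non-privileged users; it is neither encryption nor static scrambling, which is one axis on which the third-party products differ:

```sql
-- Mask an email column and a card number column for users without UNMASK
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
ALTER TABLE dbo.Customers
    ALTER COLUMN CardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Principals granted UNMASK continue to see the real values
GRANT UNMASK TO ComplianceRole;
```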

Msg 5042, Level 16, State 12, Line 6 The filegroup '*****' cannot be removed because it is not empty.


Hi,

I am trying to delete a file and filegroup from the database as they are not required. I was able to delete the file; however, while deleting the filegroup I get the message below. I also tried deleting the filegroup from SSMS, and I get the same message there.

Msg 5042, Level 16, State 12, Line 6 The filegroup '*****' cannot be removed because it is not empty.

I have already tried the commands below so that I could delete the filegroup:

Alter partition function FUNC_PARTITION merge range ('N')
ALTER PARTITION SCHEME PARTITION-SCHEME-NAME NEXT USED [OTHER-FILEGROUP-NAME]
ALTER PARTITION SCHEME PARTITION-SCHEME-NAME NEXT USED
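Msg 5042 usually means some allocation units still live on the filegroup. A hedged sketch to list what remains (replace the filegroup name placeholder):

```sql
-- Objects and indexes still allocated on the filegroup in question
SELECT o.name AS object_name, i.name AS index_name, fg.name AS filegroup_name
FROM sys.allocation_units au
JOIN sys.partitions p  ON au.container_id IN (p.hobt_id, p.partition_id)
JOIN sys.indexes i     ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.objects o     ON o.object_id = p.object_id
JOIN sys.filegroups fg ON fg.data_space_id = au.data_space_id
WHERE fg.name = N'YOUR_FILEGROUP';
```

If the query returns nothing, it is worth checking whether a partition scheme still maps a partition (or its NEXT USED slot) to the filegroup via sys.destination_data_spaces, since a scheme reference also blocks removal.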



Swapnil Ambre

How to have SQL Server not restart on the same node but fail over on the 1st failure in a cluster


Hi, 

By default, SQL Server tries to restart on the same node in case of a failure; only then does it fail over.

Personally I am fine with the default behavior, but I am being asked an interesting question: is it possible to have SQL Server NOT try a restart on the same node and just fail over in case of a failure?

And where is it done from? Is it this:

In cluadmin, under SQL Server Properties > Policies, there is an option:

1. Maximum restarts in the specified period. Will setting it to zero do the needful?

Or is there some other way? Thanks.
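The same policy is also settable from PowerShell; a hedged sketch (the resource name varies per install, and this assumes the FailoverClusters module's common resource properties):

```powershell
# "Maximum restarts in the specified period" is the RestartThreshold common property.
# With it at 0 the resource is not restarted on the current node, so a failure
# proceeds straight to failover (assuming the restart action allows failover).
$res = Get-ClusterResource -Name "SQL Server"
$res.RestartThreshold = 0
$res | Format-List Name, RestartAction, RestartThreshold, RestartPeriod
```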


D

How to delete a corrupt table in SQL Server 2012?


I found a corrupt table in my database.

ERROR: Possible schema corruption

It is a temporary table, not important at all, really!

I tried DBCC CHECKDB and CHECKTABLE with repair, but without success.

Now, how can I delete this one corrupt table?

It's hurting my eyes!
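Since repair has already been tried without success, this is only a sketch of the usual order of operations, with placeholder names; a restore from a clean backup, or scripting the good objects into a new database, remains the clean way out if none of it works:

```sql
-- Try a plain drop first; schema corruption does not always block it
DROP TABLE dbo.CorruptTempTable;

-- Last resort if the drop is blocked and no backup is available
-- (REPAIR_ALLOW_DATA_LOSS can lose data; take a backup of the current state first)
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDb SET MULTI_USER;
```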

SQL Server 2014 cannot start its service; it seems the 'model' transaction log is full. Please teach me how to fix this, thanks.

2020-04-07 13:37:52.08 Server      Microsoft SQL Server 2014 - 12.0.4100.1 (X64) 
Apr 20 2015 17:29:27 
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

2020-04-07 13:37:52.08 Server      UTC adjustment: 8:00
2020-04-07 13:37:52.08 Server      (c) Microsoft Corporation.
2020-04-07 13:37:52.08 Server      All rights reserved.
2020-04-07 13:37:52.08 Server      Server process ID is 2516.
2020-04-07 13:37:52.08 Server      System Manufacturer: 'Microsoft Corporation', System Model: 'Virtual Machine'.
2020-04-07 13:37:52.08 Server      Authentication mode is MIXED.
2020-04-07 13:37:52.08 Server      Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG'.
2020-04-07 13:37:52.08 Server      The service account is 'TOPPANFORMS\sqlsvc'. This is an informational message; no user action is required.
2020-04-07 13:37:52.08 Server      Registry startup parameters: 
-d C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\master.mdf
-e C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG
-l C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
2020-04-07 13:37:52.08 Server      Command Line Startup Parameters:
-s "MSSQLSERVER"
2020-04-07 13:37:52.39 Server      SQL Server detected 1 sockets with 4 cores per socket and 4 logical processors per socket, 4 total logical processors; using 4 logical processors based on SQL Server licensing. This is an informational message; no user action is required.
2020-04-07 13:37:52.39 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2020-04-07 13:37:52.39 Server      Detected 16383 MB of RAM. This is an informational message; no user action is required.
2020-04-07 13:37:52.39 Server      Using conventional memory in the memory manager.
2020-04-07 13:37:52.47 Server      Machine supports memory error recovery. SQL memory protection is enabled to recover from memory corruption.
2020-04-07 13:37:52.50 Server      Default collation: SQL_Latin1_General_CP1_CI_AS (us_english 1033)
2020-04-07 13:37:52.55 Server      The maximum number of dedicated administrator connections for this instance is '1'
2020-04-07 13:37:52.55 Server      This instance of SQL Server last reported using a process ID of 4480 at 4/7/2020 1:36:55 PM (local) 4/7/2020 5:36:55 AM (UTC). This is an informational message only; no user action is required.
2020-04-07 13:37:52.55 Server      Node configuration: node 0: CPU mask: 0x000000000000000f:0 Active CPU mask: 0x000000000000000f:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2020-04-07 13:37:52.56 Server      Using dynamic lock allocation.  Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node.  This is an informational message only.  No user action is required.
2020-04-07 13:37:52.60 spid8s      Starting up database 'master'.
2020-04-07 13:37:52.67 Server      CLR version v4.0.30319 loaded.
2020-04-07 13:37:52.72 spid8s      1 transactions rolled forward in database 'master' (1:0). This is an informational message only. No user action is required.
2020-04-07 13:37:52.75 Server      Common language runtime (CLR) functionality initialized using CLR version v4.0.30319 from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\.
2020-04-07 13:37:52.79 spid8s      0 transactions rolled back in database 'master' (1:0). This is an informational message only. No user action is required.
2020-04-07 13:37:52.91 spid8s      CHECKDB for database 'master' finished without errors on 2020-04-07 01:00:02.783 (local time). This is an informational message only; no user action is required.
2020-04-07 13:37:52.91 spid8s      SQL Server Audit is starting the audits. This is an informational message. No user action is required.
2020-04-07 13:37:52.91 spid8s      SQL Server Audit has started the audits. This is an informational message. No user action is required.
2020-04-07 13:37:52.95 spid8s      SQL Trace ID 1 was started by login "sa".
2020-04-07 13:37:52.96 spid8s      Server name is 'TFHKGSQLSVR'. This is an informational message only. No user action is required.
2020-04-07 13:37:53.02 spid13s     A self-generated certificate was successfully loaded for encryption.
2020-04-07 13:37:53.03 spid13s     Server is listening on [ 'any' <ipv6> 1433].
2020-04-07 13:37:53.03 spid13s     Server is listening on [ 'any' <ipv4> 1433].
2020-04-07 13:37:53.03 spid13s     Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\MSSQLSERVER ].
2020-04-07 13:37:53.03 spid13s     Server named pipe provider is ready to accept connection on [ \\.\pipe\sql\query ].
2020-04-07 13:37:53.03 Server      Server is listening on [ ::1 <ipv6> 1434].
2020-04-07 13:37:53.03 Server      Server is listening on [ 127.0.0.1 <ipv4> 1434].
2020-04-07 13:37:53.03 Server      Dedicated admin connection support was established for listening locally on port 1434.
2020-04-07 13:37:53.03 spid13s     SQL Server is now ready for client connections. This is an informational message; no user action is required.
2020-04-07 13:37:53.03 Server      SQL Server is attempting to register a Service Principal Name (SPN) for the SQL Server service. Kerberos authentication will not be possible until a SPN is registered for the SQL Server service. This is an informational message. No user action is required.
2020-04-07 13:37:53.07 Server      The SQL Server Network Interface library could not register the Service Principal Name (SPN) [ MSSQLSvc/TFHKGSQLSVR.toppanforms.com ] for the SQL Server service. Windows return code: 0x21c7, state: 15. Failure to register a SPN might cause integrated authentication to use NTLM instead of Kerberos. This is an informational message. Further action is only required if Kerberos authentication is required by authentication policies and if the SPN has not been manually registered.
2020-04-07 13:37:53.07 Server      The SQL Server Network Interface library could not register the Service Principal Name (SPN) [ MSSQLSvc/TFHKGSQLSVR.toppanforms.com:1433 ] for the SQL Server service. Windows return code: 0x21c7, state: 15. Failure to register a SPN might cause integrated authentication to use NTLM instead of Kerberos. This is an informational message. Further action is only required if Kerberos authentication is required by authentication policies and if the SPN has not been manually registered.
2020-04-07 13:37:53.14 spid14s     A new instance of the full-text filter daemon host process has been successfully started.
2020-04-07 13:37:53.19 spid18s     Starting up database 'FileDownload'.
2020-04-07 13:37:53.19 spid19s     Starting up database 'SharePoint_Config'.
2020-04-07 13:37:53.19 spid17s     Starting up database 'msdb'.
2020-04-07 13:37:53.19 spid21s     Starting up database 'SharePoint_AdminContent_3248fb2a-30de-4f63-bbe5-023ff58b54b1'.
2020-04-07 13:37:53.19 spid20s     Starting up database 'WSS_Content'.
2020-04-07 13:37:53.19 spid24s     Starting up database 'Search_Service_Application_AnalyticsReportingStoreDB_2d8bfb2060b14612a759e888acea6a73'.
2020-04-07 13:37:53.19 spid25s     Starting up database 'Search_Service_Application_LinksStoreDB_8d331ab5bde744fcbd2cda36368ab4df'.
2020-04-07 13:37:53.19 spid22s     Starting up database 'Search_Service_Application_DB_afcc8cbabc00428e869aa738e6788464'.
2020-04-07 13:37:53.19 spid26s     Starting up database 'Secure_Store_Service_DB_14a51c2cf4c44a4391bd87ec5f617d20'.
2020-04-07 13:37:53.20 spid23s     Starting up database 'Search_Service_Application_CrawlStoreDB_03e31aef1a714663918f7310cd0ab606'.
2020-04-07 13:37:53.20 spid27s     Starting up database 'StateService_01fe9be85c2648729e2c82808563939b'.
2020-04-07 13:37:53.20 spid9s      Starting up database 'mssqlsystemresource'.
2020-04-07 13:37:53.20 spid28s     Starting up database 'AppMng_Service_DB_a429bc6a363f4108980a2c075ef8403b'.
2020-04-07 13:37:53.20 spid29s     Starting up database 'WSS_Logging'.
2020-04-07 13:37:53.20 spid30s     Starting up database 'Bdc_Service_DB_558c823bf13b4ad6a620d4797fe23d17'.
2020-04-07 13:37:53.20 spid31s     Starting up database 'AppFabric'.
2020-04-07 13:37:53.20 spid32s     Starting up database 'WSS_SBPD'.
2020-04-07 13:37:53.23 spid9s      The resource database build version is 12.00.4100. This is an informational message only. No user action is required.
2020-04-07 13:37:53.27 spid27s     1 transactions rolled forward in database 'StateService_01fe9be85c2648729e2c82808563939b' (14:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.28 spid27s     0 transactions rolled back in database 'StateService_01fe9be85c2648729e2c82808563939b' (14:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.32 spid26s     1 transactions rolled forward in database 'Secure_Store_Service_DB_14a51c2cf4c44a4391bd87ec5f617d20' (13:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.34 spid31s     1 transactions rolled forward in database 'AppFabric' (18:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.36 spid24s     1 transactions rolled forward in database 'Search_Service_Application_AnalyticsReportingStoreDB_2d8bfb2060b14612a759e888acea6a73' (11:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.36 spid26s     0 transactions rolled back in database 'Secure_Store_Service_DB_14a51c2cf4c44a4391bd87ec5f617d20' (13:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.37 spid24s     0 transactions rolled back in database 'Search_Service_Application_AnalyticsReportingStoreDB_2d8bfb2060b14612a759e888acea6a73' (11:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.37 spid31s     0 transactions rolled back in database 'AppFabric' (18:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.43 spid30s     1 transactions rolled forward in database 'Bdc_Service_DB_558c823bf13b4ad6a620d4797fe23d17' (17:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.44 spid30s     0 transactions rolled back in database 'Bdc_Service_DB_558c823bf13b4ad6a620d4797fe23d17' (17:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.51 spid9s      Starting up database 'model'.
2020-04-07 13:37:53.53 spid28s     1 transactions rolled forward in database 'AppMng_Service_DB_a429bc6a363f4108980a2c075ef8403b' (15:0). This is an informational message only. No user action is required.
2020-04-07 13:37:53.57 spid28s     0 transactions rolled back in database 'AppMng_Service_DB_a429bc6a363f4108980a2c075ef8403b' (15:0). This is an informational message only. No user action is required.
2020-04-07 13:37:54.70 spid9s      Recovery of database 'model' (3) is 2% complete (approximately 9 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.
2020-04-07 13:37:54.75 spid9s      Recovery of database 'model' (3) is 2% complete (approximately 9 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.
2020-04-07 13:37:54.76 spid9s      3 transactions rolled forward in database 'model' (3:0). This is an informational message only. No user action is required.
2020-04-07 13:37:54.78 spid9s      0 transactions rolled back in database 'model' (3:0). This is an informational message only. No user action is required.
2020-04-07 13:37:54.91 spid9s      CHECKDB for database 'model' finished without errors on 2019-09-20 01:00:03.130 (local time). This is an informational message only; no user action is required.
2020-04-07 13:37:54.91 spid9s      Clearing tempdb database.
2020-04-07 13:37:54.93 spid9s      Error: 9002, Severity: 17, State: 2.
2020-04-07 13:37:54.93 spid9s      The transaction log for database 'model' is full due to 'LOG_BACKUP'.
2020-04-07 13:37:54.93 spid9s      Could not write a checkpoint record in database model because the log is out of space. Contact the database administrator to truncate the log or allocate more space to the database log files.
2020-04-07 13:37:54.93 spid9s      Error: 5901, Severity: 16, State: 1.
2020-04-07 13:37:54.93 spid9s      One or more recovery units belonging to database 'model' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
2020-04-07 13:37:54.93 spid9s      Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.
2020-04-07 13:37:54.93 spid9s      SQL Server shutdown has been initiated
2020-04-07 13:37:54.93 spid9s      SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
2020-04-07 13:37:55.94 spid9s      Error: 25725, Severity: 16, State: 1.
2020-04-07 13:37:55.94 spid9s      An error occurred while trying to flush all running Extended Event sessions.  Some events may be lost.

how to add a data file to the primary database in Log Shipping without breaking log shipping


Good Morning Experts,

How can I add a data file to the primary database in log shipping without breaking log shipping?
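For what it's worth, a hedged sketch: ALTER DATABASE ... ADD FILE on the primary is fully logged, so the next log restore creates the same file on the secondary automatically, provided the same drive and folder path exist there; if the path differs, the restore job fails and the file must be relocated with RESTORE ... WITH MOVE. Database name and path below are placeholders:

```sql
ALTER DATABASE MyLogShippedDb
ADD FILE (NAME = N'MyLogShippedDb_data2',
          FILENAME = N'D:\SQLData\MyLogShippedDb_data2.ndf',
          SIZE = 512MB, FILEGROWTH = 256MB);
```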


Kiran

Setting of table ANSI_NULLS


If there is a table having a given ANSI_NULLS setting,

how do I change the setting of this table?
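Assuming the question is about the ANSI_NULLS setting a table was created with: it is captured at CREATE TABLE time (visible via OBJECTPROPERTY or sys.tables.uses_ansi_nulls) and cannot be flipped with ALTER TABLE, so the usual route is to recreate the table under the desired session setting. A sketch with placeholder names:

```sql
-- Check the current setting (1 = created with ANSI_NULLS ON)
SELECT OBJECTPROPERTY(OBJECT_ID(N'dbo.MyTable'), 'IsAnsiNullsOn') AS IsAnsiNullsOn;

SET ANSI_NULLS ON;
-- Recreate under the new setting, then swap:
SELECT * INTO dbo.MyTable_new FROM dbo.MyTable;
-- DROP TABLE dbo.MyTable;  EXEC sp_rename 'dbo.MyTable_new', 'MyTable';
```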

Kerberos not being used

Suppose the SPN of the SQL Server is set properly. What other reasons could there be that a remote connection is still not using Kerberos?
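A quick check worth posting alongside the SPN details: see what existing connections actually negotiated. Common non-SPN causes are connecting via localhost, an IP address, or shared memory (no Kerberos ticket is requested), a duplicate SPN registered under a different account, or a stale ticket cache on the client.

```sql
-- 'KERBEROS' vs 'NTLM' for the current connection
SELECT session_id, net_transport, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```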

Problem with query in stored procedure


Running it this way takes 40 seconds:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 
SET NOCOUNT ON


DECLARE @CurrentDate datetime, @minOrderId1 int, @minOrderId2 int

SET @CurrentDate = ags.dbo.uf_strip_time(GETDATE())
select @minOrderId1 = min(orderid) from epos_orders where xmitdate >= dateadd(dd,-2,@CurrentDate)  --back 2 days
select @minOrderId2 = min(orderid) from epos_orders where xmitdate >= dateadd(wk,-1,@CurrentDate)  --back 1 week

CREATE TABLE #dupes
            (   rowId       int identity,
                store       int,
                partition1  varchar(3),
                Conf1       int,
                XmitDate1   datetime,
                ItmCnt1     int,
                TotQty1     int,
                XferFileId1 int,
                partition2  varchar(3),
                Conf2       int,
                XmitDate2   datetime,
                ItmCnt2     int,
                TotQty2     int,
                XferFileId2 int
            )

----------------------------------------------------------------------------------------------------------
-- identify possible dupes within all files that were just imported (status='' or 'X')
INSERT #Dupes
   SELECT
               store        = t1.store,
               partition1   = t1.partition,
               Conf1        = t1.conf,
               XmitDate1    = t1.xmitdate,
               ItmCnt1      = t1.ItmCnt,
               TotQty1      = t1.TotQty,
               XferFileId1  = t1.xferfileId,
               partition2   = t2.partition,
               Conf2        = t2.conf,
               XmitDate2    = t2.xmitdate,
               ItmCnt2      = t2.ItmCnt,
               TotQty2      = t2.TotQty,
               XferFileId2  = t2.xferfileId

     FROM


           (    select  -- transmissions not sent to OMI yet
                          xferfileId, conf, eo.store, partition, xmitdate, ItmCnt= count(item), TotQty= sum(qty)
                    from  epos_orders eo
                   where  1=1
                          and orderID >= @minOrderId1
                          --status = '' means order just loaded; hasn't been thru edits yet
                          --status = 'X' means order passed edits; ready to be exported to OMI
                          and (status = '' or cast(status as varbinary(1)) = cast('X' as varbinary(1)) )
                          and storeOverride <> '99' --used specifically to prevent order from getting caught by this process
                          and eo.store not in (995, 1995, 2995, 3995) --cash n carry
                group by  xferfileId, conf, eo.store, partition, xmitdate
                  having  count(item) >= 5
            ) T1
INNER JOIN
           (    select  -- transmissions not sent to OMI yet
                          xferfileId, conf, eo.store, partition, xmitdate, ItmCnt= count(item), TotQty= sum(qty)
                    from  epos_orders eo
                   where  1=1
                          and orderID >= @minOrderId1
                          --status = '' means order just loaded; hasn't been thru edits yet
                          --status = 'X' means order passed edits; ready to be exported to OMI
                          and (status = '' or cast(status as varbinary(1)) = cast('X' as varbinary(1)) )
                          and storeOverride <> '99' --used specifically to prevent order from getting caught by this process
                          and eo.store not in (995, 1995, 2995, 3995) --cash n carry
                group by  xferfileId, conf, eo.store, partition, xmitdate
                having count(item) >= 5
            ) T2
        ON  T1.store = t2.store
            and T1.ItmCnt = T2.ItmCnt
            and T1.TotQty = T2.TotQty
     WHERE  --1st condition: duplicate is in same file, different partition
            (T1.xferFileId = T2.xferFileId
             and T1.partition <> T2.partition)
             OR
             --2nd condition: duplicate is different file, same partition
            (T1.xferFileId <> T2.xferFileId
             and T1.partition = T2.partition)
             OR
             --3rd condition: duplicate is different file, different partition
            (T1.xferFileId <> T2.xferFileId
             and T1.partition <> T2.partition)
   ORDER BY t1.xferFileId, t1.store, t1.conf, T1.partition

-- use for debugging
--select * from #dupes

delete #dupes
  from #dupes d1
 where exists (select *  from #dupes d2 
                where    --same file is in more than 1 row
                          (d1.XferFileId1 = d2.XferFileId1
                           and d1.partition2 = d2.partition1
                           and d2.rowId > d1.rowId)
                       OR --extra rows resulting from cartesian product
                          (d1.XferFileId1 = d2.XferFileId2
                           and d1.partition1 = d2.partition2
                           and d2.rowId > d1.rowId)
                      )
-- use for debugging
--select * from #dupes                      
----------------------------------------------------------------------------------------------------------


----------------------------------------------------------------------------------------------------------
--identify possible dupes against files that have already been processed
INSERT #Dupes
   SELECT
               store        = TODAY.store,
               partition1  =  TODAY.partition,
               Conf1        = TODAY.conf,
               XmitDate1    = TODAY.xmitdate,
               ItmCnt1      = TODAY.ItmCnt,
               TotQty1      = TODAY.TotQty,
               XferFileId1  = TODAY.xferFileId,
               partition2   = PASTWEEK.partition,
               Conf2        = PASTWEEK.conf,
               XmitDate2    = PASTWEEK.xmitdate,
               ItmCnt2      = PASTWEEK.ItmCnt,
               TotQty2      = PASTWEEK.TotQty,
               XferFileId2  = PASTWEEK.xferFileId

     FROM


              (   select  -- transmissions not sent to OMI yet
                          conf, eo.store, partition, xmitdate, xferFileId, ItmCnt= count(item), 
                          TotQty= sum(eo.qty + isnull(gbx.qty,0) + isnull(gbr.qty,0) )
                    from  epos_orders eo
               left join OrderManagement..GroupBookingXref gbx
                      on  gbx.origOrderId = eo.OrderID
               left join OrderManagement..GroupStoreBkgReorderReductions gbr
                      on gbr.origOrderId = eo.OrderID      
                   where  1=1
                          and orderID >= @minOrderId1
                          --status = '' means order just loaded; hasn't been thru edits yet
                          --status = 'X' means order passed edits; ready to be exported to OMI
                          and (status = '' or cast(status as varbinary(1)) = cast('X' as varbinary(1)) )
                          and storeOverride <> '99' --used specifically to prevent order from getting caught by this process
                          and eo.store not in (995, 1995, 2995, 3995) --cash n carry
                group by  conf, eo.store, partition, xmitdate, xferFileId
                  having  count(item) >= 5
             )  as TODAY
INNER JOIN
             (    select  -- transmissions from past week
                          conf, eo.store, partition, xmitdate, xferFileId, ItmCnt= count(item), 
                          TotQty= sum(eo.qty + isnull(gbx.qty,0) + isnull(gbr.qty,0) )
                    from  epos_orders eo
               left join OrderManagement..GroupBookingXref gbx
                      on  gbx.origOrderId = eo.OrderID
               left join OrderManagement..GroupStoreBkgReorderReductions gbr
                      on gbr.origOrderId = eo.OrderID      
                   where  1=1
                          and orderID between @minOrderId2 and @minOrderId1 
                          and cast(status as varbinary(1)) <> cast('e' as varbinary(1))
                          and eo.store not in (995, 1995, 2995, 3995) --cash n carry
                group by  conf, eo.store, partition, xmitdate, xferFileId
                  having  count(item) >= 5
             )  as PASTWEEK
        ON      TODAY.store       = PASTWEEK.store
            and TODAY.[partition] = PASTWEEK.[partition]
            and TODAY.ItmCnt      = PASTWEEK.ItmCnt
            and TODAY.TotQty      = PASTWEEK.TotQty

     WHERE  PASTWEEK.xmitdate < TODAY.xmitdate

-- use for debugging
--select * from #dupes

-- ...an order is a duplicate IF: store, partition, item count & item qty are same
--    as the lastly submitted partition by the same store (this is the tricky part)

-- file must be the most recently submitted order prior to the current one
DELETE #dupes
  FROM #dupes d1
 WHERE 1=1
   AND  xmitdate2  <> (select max(d2.xmitdate2)
                         from #dupes d2
                        where d2.store = d1.store
                          and d2.partition1 = d1.partition1)

-- use for debugging
--select * from #dupes

-- If store has submitted at least 1 file w/same partition since the older file - exclude
DELETE #dupes
    FROM #dupes d1
     WHERE 1=1
     and exists (select top 1 *
                   from Epos_Orders eo
                  where 1=1
                    and eo.OrderID between @minOrderId2 and @minOrderId1 
                    and eo.store = d1.store
                    and eo.partition = d1.partition1
                    and eo.XmitDate between d1.XmitDate2 and d1.XmitDate1
                    and cast(eo.conf as int) <> d1.Conf1
                    and cast(eo.conf as int) <> d1.conf2
                    and cast(eo.status as varbinary(1)) <> cast('e' as varbinary(1))
                 )


--if running interactively, return the dupes
--insert #dupes
--    select 401, 7, 1212, '8/3/2016 08:15:00', 15, 99, 842010, 1212, '08/3/2016 07:30:00', 15, 99, 842004

select * from #dupes

drop table #dupes
------------------------------------------------------------------------------------------------

GO

Running it this way takes 2 seconds:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED 
SET NOCOUNT ON


CREATE TABLE #dupes
            (   rowId       int identity,
                store       int,
                partition1  varchar(3),
                Conf1       int,
                XmitDate1   datetime,
                ItmCnt1     int,
                TotQty1     int,
                XferFileId1 int,
                partition2  varchar(3),
                Conf2       int,
                XmitDate2   datetime,
                ItmCnt2     int,
                TotQty2     int,
                XferFileId2 int
            )

CREATE TABLE #today
            (   
                store       int,
                partition  varchar(3),
                Conf       int,
                XmitDate   datetime,
                ItmCnt     int,
                TotQty     int,
                XferFileId int
            )

CREATE TABLE #lastWeek
            (   
                store       int,
                partition  varchar(3),
                Conf       int,
                XmitDate   datetime,
                ItmCnt     int,
                TotQty     int,
                XferFileId int
            )


DECLARE @CurrentDate datetime, @minOrderId1 int, @minOrderId2 int

SET @CurrentDate = ags.dbo.uf_strip_time(GETDATE())
select @minOrderId1 = min(orderid) from epos_orders where xmitdate >= dateadd(dd,-2,@CurrentDate)  --back 2 days
select @minOrderId2 = min(orderid) from epos_orders where xmitdate >= dateadd(wk,-1,@CurrentDate)  --back 1 week

----------------------------------------------------------------------------------------------------------
-- identify possible dupes within all files that were just imported (status='' or 'X')
INSERT #Dupes
   SELECT
               store        = t1.store,
               partition1   = t1.partition,
               Conf1        = t1.conf,
               XmitDate1    = t1.xmitdate,
               ItmCnt1      = t1.ItmCnt,
               TotQty1      = t1.TotQty,
               XferFileId1  = t1.xferfileId,
               partition2   = t2.partition,
               Conf2        = t2.conf,
               XmitDate2    = t2.xmitdate,
               ItmCnt2      = t2.ItmCnt,
               TotQty2      = t2.TotQty,
               XferFileId2  = t2.xferfileId

     FROM


           (    select  -- transmissions not sent to OMI yet
                          xferfileId, conf, eo.store, partition, xmitdate, ItmCnt= count(item), TotQty= sum(qty)
                    from  epos_orders eo
                   where  1=1
                          and orderID >= @minOrderId1
                          --status = '' means order just loaded; hasn't been thru edits yet
                          --status = 'X' means order passed edits; ready to be exported to OMI
                          and (status = '' or cast(status as varbinary(1)) = cast('X' as varbinary(1)) )
                          and storeOverride <> '99' --used specifically to prevent order from getting caught by this process
                          and eo.store not in (995, 1995, 2995, 3995) --cash n carry
                group by  xferfileId, conf, eo.store, partition, xmitdate
                  having  count(item) >= 5
            ) T1
INNER JOIN
           (    select  -- transmissions not sent to OMI yet
                          xferfileId, conf, eo.store, partition, xmitdate, ItmCnt= count(item), TotQty= sum(qty)
                    from  epos_orders eo
                   where  1=1
                          and orderID >= @minOrderId1
                          --status = '' means order just loaded; hasn't been thru edits yet
                          --status = 'X' means order passed edits; ready to be exported to OMI
                          and (status = '' or cast(status as varbinary(1)) = cast('X' as varbinary(1)) )
                          and storeOverride <> '99' --used specifically to prevent order from getting caught by this process
                          and eo.store not in (995, 1995, 2995, 3995) --cash n carry
                group by  xferfileId, conf, eo.store, partition, xmitdate
                having count(item) >= 5
            ) T2
        ON  T1.store = t2.store
            and T1.ItmCnt = T2.ItmCnt
            and T1.TotQty = T2.TotQty
     WHERE  --1st condition: duplicate is in same file, different partition
            (T1.xferFileId = T2.xferFileId
             and T1.partition <> T2.partition)
             OR
             --2nd condition: duplicate is different file, same partition
            (T1.xferFileId <> T2.xferFileId
             and T1.partition = T2.partition)
             OR
             --3rd condition: duplicate is different file, different partition
            (T1.xferFileId <> T2.xferFileId
             and T1.partition <> T2.partition)
   ORDER BY t1.xferFileId, t1.store, t1.conf, T1.partition
    --select '1', * from #dupes

   DELETE #dupes
     FROM #dupes d1
    WHERE EXISTS (select * from #dupes d2
                where    --same file is in more than 1 row
                          (d1.XferFileId1 = d2.XferFileId1
                           and d1.partition2 = d2.partition1
                           and d2.rowId > d1.rowId)
                       OR --extra rows resulting from cartesian product
                          (d1.XferFileId1 = d2.XferFileId2
                           and d1.partition1 = d2.partition2
                           and d2.rowId > d1.rowId)
                      )


    --select '2', * from #dupes
-------------------------------------------------------


----------------------------------------------------------------------------------------------------------
        --identify possible dupes against files that have already been processed
        INSERT #today
            select  -- transmissions not sent to OMI yet
                    eo.store, partition, conf, xmitdate, ItmCnt= count(item), 
                    TotQty= sum(eo.qty + isnull(gbx.qty,0) + isnull(gbr.qty,0) ), xferFileId
             from  epos_orders eo
        left join OrderManagement..GroupBookingXref gbx
               on  gbx.origOrderId = eo.OrderID
        left join OrderManagement..GroupStoreBkgReorderReductions gbr
               on gbr.origOrderId = eo.OrderID      
            where  1=1
                    and orderID >= @minOrderId1
                    --status = '' means order just loaded; hasn't been thru edits yet
                    --status = 'X' means order passed edits; ready to be exported to OMI
                    and (status = '' or cast(status as varbinary(1)) = cast('X' as varbinary(1)) )
                    and storeOverride <> '99' --used specifically to prevent order from getting caught by this process
                    and eo.store not in (995, 1995, 2995, 3995) --cash n carry
        group by  conf, eo.store, partition, xmitdate, xferFileId
          having  count(item) >= 5
        --select * from #today order by store

        INSERT #lastWeek
                select  -- transmissions from past week
                        eo.store, partition, conf, xmitdate, ItmCnt= count(item), 
                        TotQty= sum(eo.qty + isnull(gbx.qty,0) + isnull(gbr.qty,0) ), xferFileId
                 from  epos_orders eo
            left join  OrderManagement..GroupBookingXref gbx
                   on  gbx.origOrderId = eo.OrderID
            left join  OrderManagement..GroupStoreBkgReorderReductions gbr
                   on  gbr.origOrderId = eo.OrderID      
                where  1=1
                       and orderID between @minOrderId2 and @minOrderId1 
                       and cast(status as varbinary(1)) <> cast('e' as varbinary(1))
                       and eo.store not in (995, 1995, 2995, 3995) --cash n carry
             group by  conf, eo.store, partition, xmitdate, xferFileId
               having  count(item) >= 5
        --select * from #lastWeek order by store

       INSERT #Dupes
               SELECT
                           store        = TODAY.store,
                           partition1  =  TODAY.partition,
                           Conf1        = TODAY.conf,
                           XmitDate1    = TODAY.xmitdate,
                           ItmCnt1      = TODAY.ItmCnt,
                           TotQty1      = TODAY.TotQty,
                           XferFileId1  = TODAY.xferFileId,
                           partition2   = PASTWEEK.partition,
                           Conf2        = PASTWEEK.conf,
                           XmitDate2    = PASTWEEK.xmitdate,
                           ItmCnt2      = PASTWEEK.ItmCnt,
                           TotQty2      = PASTWEEK.TotQty,
                           XferFileId2  = PASTWEEK.xferFileId

                 FROM #today    today
           INNER JOIN #lastWeek pastweek
                ON      TODAY.store       = PASTWEEK.store
                    and TODAY.[partition] = PASTWEEK.[partition]
                    and TODAY.ItmCnt      = PASTWEEK.ItmCnt
                    and TODAY.TotQty      = PASTWEEK.TotQty
                 WHERE  PASTWEEK.xmitdate < TODAY.xmitdate

        --select '3', * from #dupes

        -- file must be the most recently submitted order prior to the current one
        DELETE #dupes
          FROM #dupes d1
         WHERE 1=1
           AND  xmitdate2  <> (select max(d2.xmitdate2)
                                 from #dupes d2
                                where d2.store = d1.store
                                  and d2.partition1 = d1.partition1)

        -- use for debugging
        --select '4', * from #dupes

        -- If store has submitted at least 1 file w/same partition since the older file - exclude
        DELETE #dupes
            FROM #dupes d1
             WHERE 1=1
             and exists (select top 1 *
                           from Epos_Orders eo
                          where 1=1
                            and eo.OrderID between @minOrderId2 and @minOrderId1 
                            and eo.store = d1.store
                            and eo.partition = d1.partition1
                            and eo.XmitDate between d1.XmitDate2 and d1.XmitDate1
                            and cast(eo.conf as int) <> d1.Conf1
                            and cast(eo.conf as int) <> d1.conf2
                            and cast(eo.status as varbinary(1)) <> cast('e' as varbinary(1))
                         )
----------------------------------------------------------------------------------------------------------


select * from #dupes

DROP TABLE #today
DROP TABLE #lastweek
DROP TABLE #dupes
GO

We would prefer it to run the first way, but it gets hung up on the INSERT into #Dupes. We have looked at the indexes and they are all fine. We have run UPDATE STATISTICS just to be sure we are getting the best execution plan, and we have changed the cost threshold for parallelism from 5 to 10 to 25 to 50 with no effect on the plan. Does anyone have any suggestions?
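Two things sometimes help with slow temp-table joins like this one (a sketch, not a confirmed fix; the index names are made up): give the temp tables an index that matches the join keys before the big INSERT, and add OPTION (RECOMPILE) so the optimizer compiles with the actual temp-table row counts instead of a cached estimate:

```sql
-- Hypothetical helper indexes, created right after #today / #lastWeek are
-- populated and before the INSERT #Dupes join:
CREATE CLUSTERED INDEX ix_today    ON #today    (store, [partition], ItmCnt, TotQty);
CREATE CLUSTERED INDEX ix_lastweek ON #lastWeek (store, [partition], ItmCnt, TotQty);

-- For the self-join against epos_orders, append a hint to the statement so a
-- fresh plan is compiled for each run:
-- INSERT #Dupes SELECT ... FROM ( ... ) T1 INNER JOIN ( ... ) T2 ON ...
--     OPTION (RECOMPILE);
```

Since these are session-scoped temp tables, the extra indexes cost only a one-time sort during the run and are dropped with the tables at the end.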

named pipes provider: could not open a connection to sql server 53


I encountered the following error:

named pipes provider: could not open a connection to sql server 53.

This error was raised for some 15 minutes, and it seemed that database operations were halted during that time.

Surprisingly, the error was written to the application error log table through the same connection on the same SQL Server.

I checked the SQL Server error log, but no error was logged there.

I cannot figure it out. Please help.
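Error 53 is the Windows "network path was not found" error; the message usually means the client fell back to the Named Pipes protocol because a TCP connection could not be established. One quick server-side check is to see which transport current sessions are actually using:

```sql
-- Which protocol are current connections using?  A mix of 'Named pipe' and
-- 'TCP' here can point to an inconsistent client configuration.
SELECT c.session_id,
       c.net_transport,        -- 'TCP', 'Named pipe', 'Shared memory'
       c.client_net_address
FROM sys.dm_exec_connections AS c;
```

If application connections normally use TCP, intermittent name-resolution or network failures during that 15-minute window would explain why new connection attempts failed with the Named Pipes message while the already-open logging connection kept working.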

Tempdb drive is completely filled after index rebuild operation


Hi Team,

Can you please help us understand a tempdb issue? We have a server that uses a separate SSD drive for tempdb.

We had an index rebuild activity on SQL Server 2016 Enterprise Edition. After the operation, the tempdb drive gradually filled up,

and we never used the SORT_IN_TEMPDB = ON option. How do we troubleshoot the issue?

Regards,

Nasar.
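Even without SORT_IN_TEMPDB, an ONLINE index rebuild uses the version store in tempdb, so large rebuilds can still fill it. A sketch of the usual first diagnostic queries, breaking down what is consuming tempdb and by whom:

```sql
-- What is consuming tempdb right now (sizes in MB)?
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Which sessions have allocated the most tempdb space:
SELECT session_id,
       SUM(user_objects_alloc_page_count
           + internal_objects_alloc_page_count) * 8 / 1024.0 AS alloc_mb
FROM tempdb.sys.dm_db_session_space_usage
GROUP BY session_id
ORDER BY alloc_mb DESC;
```

Note that the data files growing to fill the drive is normal behavior; tempdb files do not shrink back on their own, so "drive full" after the rebuild does not necessarily mean the space is still in use inside tempdb.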

Stored Procedure Usage


Hi,

I am running a query that uses sys.dm_exec_procedure_stats to get a stored procedure usage report, and I am a little confused: it shows a recent last execution time (like today), but the execution count is NULL. What should I understand from that?

Is the stored procedure not being used at all since SQL Server was last restarted?

How should I interpret the last_execution_time and execution_count columns showing 4/1/2020 and NULL?
I was reading the definitions but got confused:

- last_execution_time: the last time at which the plan started executing (i.e., the last time the execution plan was used to execute a query)
- execution_count: the number of times the plan has been executed since it was last compiled (i.e., the number of times the plan has been used to execute a query)

Thanks
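Within sys.dm_exec_procedure_stats itself, execution_count is not NULL for any row the DMV returns; a NULL usually appears because the report query outer-joins the DMV to the procedure catalog and the procedure has no plan in cache (never executed since the last restart, or its plan was evicted or recompiled). A sketch of such a report, with the NULL semantics made explicit:

```sql
-- NULLs on the right-hand side mean "no cached plan for this procedure" --
-- the DMV only reports procedures whose plan is currently in cache.
SELECT p.name,
       ps.cached_time,           -- when the current plan entered the cache
       ps.last_execution_time,   -- last start of execution for that plan
       ps.execution_count        -- executions since the plan was cached
FROM sys.procedures AS p
LEFT JOIN sys.dm_exec_procedure_stats AS ps
       ON ps.database_id = DB_ID()
      AND ps.object_id   = p.object_id
ORDER BY ps.last_execution_time DESC;
```

Also note the counters are per cached plan, not per restart: a recompile resets execution_count even though the procedure has been used.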


do's and dont's of backup and restore


Hi. Years ago, when I did my last backup, I think there were some rules about the backup going to a different drive than the one where the database resides, and about restores coming from a different drive.

I just backed up my 112 GB database from my C: drive to my E: drive. I'll find out what my E: drive really is, but Explorer says it's local, and Storage only knows about my C: drive.

If I need to do a restore, is there anything I can mitigate now? Will I be able to run a restore from a different drive?
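Backing up to a different physical drive is about not losing the database and its only backup together; restoring from any drive the server can read is fine. A sketch with hypothetical database, file, and path names (adjust to your own):

```sql
-- Hypothetical names/paths. CHECKSUM catches damaged pages at backup time;
-- COMPRESSION is edition-dependent.
BACKUP DATABASE MyDb
TO DISK = N'E:\Backups\MyDb.bak'
WITH CHECKSUM, COMPRESSION, INIT;

-- Cheap sanity check, runnable now, that the backup file is readable:
RESTORE VERIFYONLY FROM DISK = N'E:\Backups\MyDb.bak' WITH CHECKSUM;

-- A restore can read the .bak from any drive; MOVE relocates the data/log
-- files if you also want them somewhere other than C:.
RESTORE DATABASE MyDb
FROM DISK = N'E:\Backups\MyDb.bak'
WITH MOVE N'MyDb'     TO N'E:\Data\MyDb.mdf',
     MOVE N'MyDb_log' TO N'E:\Data\MyDb_log.ldf',
     REPLACE;
```

Running RESTORE VERIFYONLY today is the cheapest mitigation available before a real restore is ever needed.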

Find a better way/solution on how to handle DB locks


How should database locks be handled in SQL Server 2016?

For example, is there a kind of alert, or a solution that automatically releases locks after some threshold?

Please help me.
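SQL Server does not release held locks on a timer (the only forced release is KILLing the session), so the usual first step is identifying the blocker rather than auto-clearing locks. A sketch of the standard blocking query, plus the per-session timeout option:

```sql
-- Who is blocked, by whom, and what they are running:
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,               -- milliseconds spent waiting
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- Per-session opt-in: give up after 5 seconds instead of waiting forever.
SET LOCK_TIMEOUT 5000;
```

For alerting on long blocking, the server-level "blocked process threshold" configuration option plus the blocked process report event is the built-in mechanism.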

Query on Linked Server in SQL Active-Passive

Hi All,

We plan to implement an Active-Passive SQL Server cluster with shared storage on SQL Server 2014.
A-P: There will be two DB servers (DB1 and DB2); one (DB1) will be working, and the other (DB2) does not operate until there is a problem with the first one. Once the working clustered SQL Server fails (reboot/hardware failure/OS corruption, etc.), the other server (DB2) takes over and starts functioning as the regular SQL Server.

I would like to know the behaviour of linked servers in an A-P environment in the cases below:
1. We have configured a linked server on DB1 (the active node) to fetch remote SQL data. If we fail over from DB1 to DB2, will the linked server work automatically on the DB2 active node, or are manual modifications required for it to work properly?
2. When we fail back from DB2 to DB1, will the linked server work automatically on the DB1 active node, or are manual modifications required?
3. Can we use the SQL virtual IP in the linked server configuration in an A-P environment?
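In a failover cluster instance the whole instance, including the master database where linked server definitions are stored, moves with the shared storage, so linked servers normally come along automatically on failover and failback. A quick post-failover verification sketch:

```sql
-- Linked server definitions live in master and fail over with the instance:
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;
```

The caveat to test is the security mapping: if the linked server logs in with mapped credentials or a fixed remote login, confirm the remote side accepts connections originating from the cluster's virtual network name/IP rather than from an individual node.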

Database '' already exists. Choose a different database name. But it doesn't!


When I try to create a database, I get:

Msg 1801, Level 16, State 3, Line 1
Database 'XXXX' already exists. Choose a different database name.

I then try to drop the database, but get:

Msg 3701, Level 11, State 11, Line 8
Cannot drop the database 'XXXX', because it does not exist or you do not have permission.

The database did exist previously. It's not currently listed in sys.databases.
Microsoft SQL Server 2017 (RTM-CU16) (KB4508218) - 14.0.3223.3 (X64)

I've had this before and was able to create the database after restarting SQL Server, but I cannot keep restarting the server.
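A few checks worth running for a "phantom" database name like this (a sketch; replace 'XXXX' with the real name). The name can survive as leftover files registered in master, as a database snapshot, or as a restore left mid-flight:

```sql
-- Is the database visible in any state at all?
SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = N'XXXX';

SELECT DB_ID(N'XXXX');   -- non-NULL means the engine still knows the name

-- Are old data/log files still registered under that database name?
SELECT DB_NAME(database_id) AS db, physical_name
FROM master.sys.master_files
WHERE DB_NAME(database_id) = N'XXXX';
```

If the first query returns a row with a state such as RESTORING, finishing or cancelling that restore (e.g. RESTORE DATABASE ... WITH RECOVERY, or DROP DATABASE as an admin) usually clears the conflict without a server restart.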

SQL Agent Jobs - why running when not supposed to?


Hello,

We have a SQL Server 2012 VM running in our DEV environment. One of our sysadmins created a backup of this VM and restored it as "DEV2" just to test out the DR process. I logged on to DEV2 and everything seemed to be fine. One thing I observed was that some of the SQL Agent jobs started to run (as scheduled) on DEV2, so I used T-SQL to disable all of the jobs. However, I noticed I was still getting job failure notifications via email, which I thought was odd since all the jobs had already been disabled (and I confirmed this). When I view the job history, I indeed see recent dates, and the jobs did in fact fail. My question is: why do these jobs continue to run on DEV2?

When I ran these tsql's on DEV2:

select @@SERVERNAME     -> outputs "DEV", the original DEV SQL Server

SELECT CONVERT(sysname, SERVERPROPERTY('servername'));    -> output "DEV2"

Obviously there's a discrepancy here. Does this have something to do with why the agent jobs ran on DEV2?

Thanks
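@@SERVERNAME comes from the server metadata in master (set at install and read at service startup), so a VM cloned from DEV still reports DEV, while SERVERPROPERTY('ServerName') reflects the actual machine name. That mismatch can confuse anything in msdb keyed to the server name, including job ownership and notification behavior. The standard fix (a sketch using the names from the question) is to re-register the local server name and restart the SQL Server service:

```sql
-- Run on the DEV2 clone; takes effect after the SQL Server service restarts.
EXEC sp_dropserver N'DEV';
EXEC sp_addserver  N'DEV2', 'local';
```

Separately, "disabled" jobs can still fire if Agent cached the schedules; restarting the SQL Server Agent service on DEV2 after disabling the jobs ensures the disabled state is picked up.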
